CG Class 23, Mon 2018-11-12

1   Upcoming classes

  1. No class Wed 11/14.
  2. No class Thurs 11/15 because Pres. Jackson scheduled a faculty meeting then.
  3. Next class: Mon 11/19.

2   Chapter 12 slides ctd

  1. 12_3 Graphical Objects and Scene Graphs.

  2. 12_4 Graphical Objects and Scene Graphs 2.

  3. 12_5 Rendering overview.

    At this point we've learned enough WebGL. The course now switches to the fundamental graphics algorithms used in the rasterizer stage of the pipeline.
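
Since the scene-graph slides describe the idea only abstractly, here is a minimal sketch of a scene-graph node in JavaScript. This is my own illustration, not the textbook's code; it assumes an MV.js-style mult() for 4x4 matrices and a per-object draw callback.

    // A node holds a transform relative to its parent, an optional draw
    // function, and a list of child nodes.
    function Node(localMatrix, drawFn) {
        this.localMatrix = localMatrix;   // 4x4 transform relative to the parent
        this.drawFn = drawFn;             // how to draw this object, or null for a pure grouping node
        this.children = [];
    }

    // Rendering is a recursive traversal that concatenates matrices.
    Node.prototype.render = function (parentMatrix) {
        var worldMatrix = mult(parentMatrix, this.localMatrix);  // assumed MV.js-style mult()
        if (this.drawFn) this.drawFn(worldMatrix);               // e.g., set the modelView uniform, then gl.drawArrays
        for (var i = 0; i < this.children.length; i++) {
            this.children[i].render(worldMatrix);
        }
    };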

3   Chapter 13 slides

  1. 13_1 Clipping.

    A lot of the material in the clipping slides is obsolete because machines are faster now. However, it's still relevant when the rendering is being done on a small coprocessor.

    Big idea (first mentioned on Oct 20): Given any orthogonal projection and clip volume, we transform the object so that we can view the new object with projection (x,y,z) -> (x,y,0) and clip volume (-1,-1,-1) to (1,1,1) and get the same image. That's called a normalization transformation.
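
Here is a minimal sketch of that normalization for the orthogonal case, written as the matrix a utility like MV.js's ortho() builds (the parameters are the usual clip-volume bounds). It translates the center of the clip volume to the origin and scales it to the canonical cube, after which projection is just dropping z.

    // Map [left,right] x [bottom,top] x [-far,-near] onto the cube [-1,1]^3.
    // Returned column-major, as gl.uniformMatrix4fv expects.
    function orthoNormalization(left, right, bottom, top, near, far) {
        var w = right - left, h = top - bottom, d = far - near;
        return new Float32Array([
            2 / w, 0,     0,      0,
            0,     2 / h, 0,      0,
            0,     0,    -2 / d,  0,
            -(left + right) / w, -(bottom + top) / h, -(near + far) / d, 1
        ]);
    }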

4   Textbook programs

  1. Chapter 10 Mandelbrot program, showing serious computation in the fragment shader.
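
Here is a stripped-down fragment shader in the spirit of that program, just to show where the serious computation sits; the uniform and varying names are my own, not the textbook's.

    // GLSL ES 1.0 fragment shader: escape-time Mandelbrot, one iteration loop per pixel.
    precision highp float;
    uniform vec2 uCenter;    // center of the view in the complex plane
    uniform float uScale;    // half-width of the view
    varying vec2 vTexCoord;  // in [0,1]^2, interpolated from the vertex shader

    void main() {
        vec2 c = uCenter + (vTexCoord - 0.5) * 2.0 * uScale;
        vec2 z = vec2(0.0);
        float n = 0.0;
        for (int i = 0; i < 200; i++) {   // GLSL ES 1.0 requires a constant loop bound
            z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;  // z = z^2 + c
            if (dot(z, z) > 4.0) break;   // escaped
            n += 1.0;
        }
        gl_FragColor = vec4(vec3(n / 200.0), 1.0);  // grey level = normalized escape count
    }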

CG Class 21, Wed 2018-10-31

1   DARPA

  1. I've mentioned DARPA several times; they funded research into computer graphics starting in the 1970s. So here's some enrichment material on DARPA and autonomous vehicles. This shows why the US is the best in the world at R&D.

    Autonomous vehicles make a marvelous example of a successful engineering project. Many different parts all have to work to make the total project succeed. They include laser rangefinding, image recognition, a road database accurate to the specific lane, and GPS navigation. This is also a model for government - university - industry interaction.

    DARPA (the Defense Advanced Research Projects Agency) started this concept with a contest paying several million dollars in prizes. (DARPA started connecting computers in different cities with phone lines in 1968. This was the ARPAnet. They funded computer graphics in the 1970s and some early steps in virtual reality in the 1980s.)

    In the 1st contest, the best vehicle failed after about 10 miles. Five vehicles completed the 130-mile course in the 2nd contest. The winning project leader was Sebastian Thrun, a Stanford CS prof. He quit and moved to Google, which has now been funding this for several years.

    Finally, Dr Tony Tether, Director of DARPA when this happened, is an RPI BS EE grad. (He went to Stanford for his PhD).

CG Class 20, Mon 2018-10-29

1   Upcoming classes

  1. Yes class this Wed.
  2. No classes next week; I'll be at ACM SIGSPATIAL 2018 in Seattle, presenting a paper at
    1. BIGSPATIAL, and
    2. the GISCUP awards. The entry by Salles Viana Gomes de Magalhães, Ricardo dos Santos Ferreira, and me placed in the top three. We won't know where in the top three until the banquet. We placed 2nd in both 2015 and 2016.

2   OpenGL / WebGL differences

  1. WebGL textures (and other features) are simpler than OpenGL textures. Although the relevant function calls have the same or similar names, the WebGL versions may have fewer options. If you do a web search on a function, the first hits often point to the different OpenGL version.
  2. When searching for something like gl.texImage2D, don't include the gl in the search term; that's just the JavaScript variable holding the WebGL context. (A minimal texture-setup call is sketched at the end of this section.)
  3. Mozilla's WebGL_API reference has a good set of man pages.
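
To make item 2 concrete, here is a minimal WebGL 1.0 texture setup. The variable names (gl for the context, image for a loaded Image) are just the conventional ones; note that the WebGL form of texImage2D shown here differs from the OpenGL form a web search may return first.

    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    // WebGL accepts the image object directly (no width/height/border arguments).
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    // Safe parameters for images that may not be powers of two.
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);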

3   Texture mapping programs ctd

  1. So far, we've always rendered an image into the display buffer. You can also render into a texture. Then you can use that texture.
  2. Rendering to a texture then rendering with it: webglfundamentals.
  3. We'll look at pickCube3.js again, now in more detail. This prints the name of the face that you pick.
    1. This draws a label for each face into a separate buffer.
    2. Then it indexes into it with the coordinates of the mouse. (A sketch of this picking idea appears at the end of this section.)

  4. hawaiiImage.html does image processing in the fragment shader.

  5. particleDiffusion: buffer ping-ponging of 50 particles initially placed randomly and then moving randomly, with their previous positions diffused as a texture.

    It renders to a texture instead of to a color buffer. Then it uses the texture.

    There are better descriptions on the web. Search for 'webgl render texture'. I'll work with the textbook code. Jumping back and forth is confusing, partly because they might use different utility files.

    There are often better descriptions on the web than in the textbook. I've considered running a course only with web material. What do you think?

    https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API has good documentation.

  6. Several of the programs don't display on my laptop, so I won't show them. They include bumpmap and render.

  7. Cubet: enabling and disabling the depth buffer and drawing translucent objects.
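
Here is a hedged sketch of the picking idea in item 3: render each face in a distinct flat color into the (not yet displayed) color buffer, read the pixel under the mouse, then redraw normally. The two drawing helpers are assumptions, not the textbook's function names.

    canvas.addEventListener("mousedown", function (event) {
        var rect = canvas.getBoundingClientRect();
        var x = event.clientX - rect.left;
        var y = canvas.height - (event.clientY - rect.top) - 1;  // flip y: WebGL's origin is bottom left

        drawSceneWithFaceIds();      // assumed helper: face i drawn in flat color (i, 0, 0, 255)

        var pixel = new Uint8Array(4);
        gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
        console.log("picked face", pixel[0]);   // red channel encodes the face id

        drawSceneNormally();         // assumed helper: redraw with the real colors
    });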

4   Chapter 10 slides ctd

  1. 10_3 Compositing and Blending.

    Some of this material was removed from the current OpenGL, and some of the remaining stuff hasn't been included in WebGL (yet?). So, you have to learn just the high-level stuff, like what this is and why it's interesting.

    1. Compositing, using partial transparency, creates complex scenes by blending simple components. (A WebGL blending sketch appears at the end of this section.)
      1. It's hard to do accurately in the pipeline.
      2. Drawing objects back-to-front into the color buffer is a good heuristic.
      3. Clamping and accuracy are concerns.
    2. Fog effects provide a depth cue that helps the viewer understand the image.
    3. Anti-aliasing improves the image quality when there are small objects.
  2. 10_4 Imaging Applications.

    1. The GPU is so powerful that people figured out ways to use it for image-processing tasks it wasn't designed for.
    2. That started the field of GPGPU.
    3. Nvidia watched this happening and added new features to the GPU, like double-precision IEEE-standard floating point computations, to make this easier.
    4. The top line of Nvidia GPUs is designed for scientific computation and doesn't even have video outputs.
    5. The 2nd fastest known supercomputer has 15000 Nvidia GPUs.
  3. 10_5 Rendering the Mandelbrot Set.

    Shows the power of GPU programming.

    The program is in Chapter 10.
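
Here is a short sketch of the back-to-front heuristic from the compositing slide, using WebGL's blending state. The two drawing helpers are assumptions; sorting the translucent objects by depth is up to the application.

    gl.enable(gl.DEPTH_TEST);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    drawOpaqueObjects();                    // assumed helper: opaque geometry first

    gl.enable(gl.BLEND);
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);  // the standard "over" compositing operator
    gl.depthMask(false);                    // read the depth buffer but don't write it

    drawTranslucentObjectsBackToFront();    // assumed helper: translucent geometry, sorted far to near

    gl.depthMask(true);
    gl.disable(gl.BLEND);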

5   Aliasing and anti-aliasing

  1. The underlying image intensity, as a function of x, is a signal, f(x).
  2. When the objects are small, say when they are far away, f(x) is changing fast.
  3. To display the image, the system evaluates f(x) at each pixel. That is, f(x) is sampled at x=0,1,2,3,...
  4. If f(x), when Fourier transformed, has frequencies higher than 1/2 (cycle per pixel), then that sampling is too coarse to capture the signal. See the Nyquist sampling theorem.
  5. When this hi-freq signal is sampled at too low a frequency, then the result computed for the frame buffer will have visual problems.
  6. It's not just that you won't see the hi frequencies. That's obvious.
  7. Worse, you will see fake low frequency signals that were never in the original scene. They are called '''aliases''' of the hi-freq signals.
  8. These artifacts may jump out at you, because of the Mach band effect.
  9. In NTSC video, aliasing can even cause rapid intensity changes to produce fake colors, and vice versa.
  10. Aliasing can occur with time signals, like a movie of a spoked wagon wheel.
  11. This is like a strobe effect.
  12. The solution is to filter out the hi frequencies before sampling, or sample with a convolution filter instead of sampling at a point. That's called '''anti-aliasing'''.
  13. OpenGL solutions (a WebGL mipmap sketch appears after the references below):
    1. Mipmaps.
    2. Compute the scene on a higher-resolution frame buffer and average down.
    3. Consider pixels to be squares not points. Compute the fraction of each pixel covered by each object, like a line. Lines have to have finite width.
  14. Refs:
    1. http://en.wikipedia.org/wiki/Aliasing
    2. http://en.wikipedia.org/wiki/Clear_Type
    3. http://en.wikipedia.org/wiki/Wagon-wheel_effect
    4. http://en.wikipedia.org/wiki/Spatial_anti-aliasing (The H. Freeman referenced there worked at RPI for 10 years.)
    5. http://en.wikipedia.org/wiki/Mipmap
    6. http://en.wikipedia.org/wiki/Jaggies
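
To make solution 1 concrete, here is the WebGL side of mipmapping, assuming a power-of-two image and a texture created as in the earlier texture-setup sketch; the last line (commented out) is roughly solution 2, a multisampled frame buffer requested as a context option.

    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);  // image must be power-of-two in WebGL 1.0
    gl.generateMipmap(gl.TEXTURE_2D);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);  // trilinear: prefiltered when minified

    // var gl = canvas.getContext("webgl", { antialias: true });  // multisampling, if the browser supports it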

6   Videos - military applications of graphics

US Military's Futuristic Augmented Reality Battlefield - Augmented Immersive Team Trainer (AITT)

Daqri's Smart Helmet Hands On

HoloLens Review: Microsoft's Version of Augmented Reality

'''Modeling and simulation''' is a standard term.

7   DARPA

  1. I've mentioned DARPA several times; they funded research into computer graphics starting in the 1970s. So here's some enrichment material on DARPA and autonomous vehicles. This shows why the US is the best in the world at R&D.

    Autonomous vehicles make a marvelous example of a successful engineering project. Many different parts all have to work to make the total project succeed. They include laser rangefinding, image recognition, a road database accurate to the specific lane, and GPS navigation. This is also a model for government - university - industry interaction.

    DARPA (the Defense Advanced Research Projects Agency) started this concept with a contest paying several million dollars in prizes. (DARPA started connecting computers in different cities with phone lines in 1968. This was the ARPAnet. They funded computer graphics in the 1970s and some early steps in virtual reality in the 1980s.)

    In the 1st contest, the best vehicle failed after about 10 miles. Five vehicles completed the 130-mile course in the 2nd contest. The winning project leader was Sebastian Thrun, a Stanford CS prof. He quit and moved to Google, which has now been funding this for several years.

    Finally, Dr Tony Tether, Director of DARPA when this happened, is an RPI BS EE grad. (He went to Stanford for his PhD).

8   Stereo viewing

  1. There are a number of techniques for stereo viewing of movies and videos, dating back to the 19th century.

    1. Make the left image red and the right one blue. Use red-blue glasses. (A WebGL sketch of this appears at the end of this section.)

      This decreases the color experience, but the glasses are cheap.

    2. Polarize the left and right images differently. Movie theatres do this.

      The glasses are cheap. There is full color. The projection process is more complicated (= expensive).

    3. The glasses have shutters that block and clear the left and right eyes in turn. In sync, the TV or projector shows the left or right image.

      The projector is cheaper. The glasses are more expensive.

    4. Use a display with a parallax barrier.

      Only one person can view at a time. No glasses are needed.

    5. Use the fact that your eyes see bright objects faster than they see dim objects. The glasses have a simple grey filter over one eye. Beer companies have used this for commercials.

      No special display is needed. This works only with moving objects.
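
Here is a hedged WebGL sketch of technique 1 (red-blue anaglyph): draw the scene twice from the two eye positions, masking the color channels. drawScene and the two view matrices are assumed helpers, not code from the course.

    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    gl.colorMask(true, false, false, true);   // left eye: write only the red channel
    drawScene(leftEyeViewMatrix);             // assumed: renders the scene with the given view matrix

    gl.clear(gl.DEPTH_BUFFER_BIT);            // keep the red image, reset depth for the second pass
    gl.colorMask(false, false, true, true);   // right eye: write only the blue channel
    drawScene(rightEyeViewMatrix);

    gl.colorMask(true, true, true, true);     // restore normal writing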

9   My research

  1. RPI article about a completed project that I was part of:

    A Game-Changing Approach: Using the X-Box Kinect as a Sensor to Conduct Centrifuge Research. Team of Rensselaer Researchers Develop New Visualization Method to Evaluate Erosion Quantity and Pattern, November 8, 2016, by Jessica Otitigbe.

  2. Another RPI article, on my student Viana Gomes de Magalhães: The Winning Algorithm, Oct 17, 2016, by Mary Martialay.

10   Updating web pages

  1. You may change a web page after it has been displayed. Editing elements and even adding or deleting them is allowed.
  2. One of the pick programs changes the displayed text in a div to show what you picked. (A small DOM sketch appears at the end of this section.)
  3. MathJax, the package I use to render LaTeX math in a web page, edits the displayed page to replace the math source with the rendered math.
  4. A standard way to make things appear and disappear is to include at the start all the elements that will be wanted later. Then make them visible or invisible. The tool on my homepage that makes sections expand and collapse does that.
  5. Every time something is changed on the page, the locations of everything on the page have to be recomputed. With a slow web browser, like old versions of IE, you can watch this.
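
A tiny sketch of item 2; the element id and variable names are my own, not from a specific pick program.

    // Assumes <div id="pickLabel"></div> somewhere in the page.
    var label = document.getElementById("pickLabel");
    label.textContent = "You picked face " + faceName;   // the browser reflows the page automatically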

CG Class 19, Thu 2018-10-25

2   Texture mapping programs

Programs.

  1. I'm not covering all the gruesome details of these programs, but just hitting highlights.

  2. particleDiffusion: buffer ping-ponging of 50 particles initially placed randomly and then moving randomly, with their previous positions diffused as a texture. (A framebuffer sketch appears at the end of this section.)

    It renders to a texture instead of to a color buffer. Then it uses the texture.

    There are better descriptions on the web. Search for 'webgl render texture'. I'll work with the textbook code. Jumping back and forth is confusing, partly because they might use different utility files.

    There are often better descriptions on the web than in the textbook. I've considered running a course only with web material. What do you think?

    https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API has good documentation.

  3. Several of the programs don't display on my laptop, so I won't show them. They include bumpmap and render.

  4. Cubet: enabling and disabling the depth buffer and drawing translucent objects.

  5. pickcube etc. This prints the name of the face that you pick.

    1. This draws a label for each face into a separate buffer.
    2. Then it indexes into it with the coordinates of the mouse.
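
Here is a hedged sketch of the ping-ponging in item 2: two textures, each attached to its own framebuffer; every frame the program renders into one while sampling the other, then swaps. The sizes, formats, and drawing helpers are illustrative, not the textbook's.

    function makeRenderTarget(size) {
        var tex = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, tex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
        var fbo = gl.createFramebuffer();
        gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
        gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
        return { texture: tex, framebuffer: fbo };
    }

    var targets = [makeRenderTarget(512), makeRenderTarget(512)];
    var current = 0;

    function step() {
        var src = targets[current], dst = targets[1 - current];
        gl.bindFramebuffer(gl.FRAMEBUFFER, dst.framebuffer);  // render INTO dst ...
        gl.bindTexture(gl.TEXTURE_2D, src.texture);           // ... while sampling src
        drawDiffusionPass();                                  // assumed helper: updates the particle texture
        gl.bindFramebuffer(gl.FRAMEBUFFER, null);             // back to the on-screen color buffer
        gl.bindTexture(gl.TEXTURE_2D, dst.texture);
        drawToScreen();                                       // assumed helper: displays the result
        current = 1 - current;                                // swap roles for the next frame
    }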

4   Chapter 10 slides

  1. 10_1 Reflection and Environment Maps.

  2. 10_2 Bump Maps.

    Perturb the surface normals to emulate small details (bumps). (A rough fragment-shader sketch appears at the end of this section.)

  3. 10_3 Compositing and Blending.

    Some of this material was removed from the current OpenGL, and some of the remaining stuff hasn't been included in WebGL (yet?). So, you have to learn just the high-level stuff, like what this is and why it's interesting.

    1. Compositing, using partial transparency, creates complex scenes by blending simple components.
      1. It's hard to do accurately in the pipeline.
      2. Drawing objects back-to-front into the color buffer is a good heuristic.
      3. Clamping and accuracy are concerns.
    2. Fog effects provide a depth cue that helps the viewer understand the image.
    3. Anti-aliasing improves the image quality when there are small objects.
  4. 10_4 Imaging Applications.

    1. The GPU is so powerful that people figured out ways to use it for image-processing tasks it wasn't designed for.
    2. That started the field of GPGPU.
    3. Nvidia watched this happening and added new features to the GPU, like double-precision IEEE-standard floating point computations, to make this easier.
    4. The top line of Nvidia GPUs is designed for scientific computation and doesn't even have video outputs.
    5. The 2nd fastest known supercomputer has 15000 Nvidia GPUs.
  5. 10_5 Rendering the Mandelbrot Set.

    Shows the power of GPU programming.

    The program is in Chapter 10.
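
Here is a very rough fragment-shader sketch of the bump-map idea (perturb the normal, not the geometry). The uniform and varying names are my own and several details are simplified; the textbook's bumpmap program does this more carefully.

    precision mediump float;
    uniform sampler2D uHeightMap;     // grey-scale height map
    uniform vec3 uLightDir;           // unit light direction, in the same frame as the vectors below
    varying vec2 vTexCoord;
    varying vec3 vNormal, vTangent, vBitangent;

    void main() {
        float du = 1.0 / 256.0;       // assumed texel size of the height map
        float h  = texture2D(uHeightMap, vTexCoord).r;
        float hx = texture2D(uHeightMap, vTexCoord + vec2(du, 0.0)).r - h;  // height gradient in u
        float hy = texture2D(uHeightMap, vTexCoord + vec2(0.0, du)).r - h;  // height gradient in v
        // Tilt the normal by the gradient, then use it for ordinary diffuse lighting.
        vec3 n = normalize(normalize(vNormal) - hx * normalize(vTangent) - hy * normalize(vBitangent));
        float diffuse = max(dot(n, normalize(uLightDir)), 0.0);
        gl_FragColor = vec4(vec3(diffuse), 1.0);
    }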

5   Aliasing and anti-aliasing

  1. The underlying image intensity, as a function of x, is a signal, f(x).
  2. When the objects are small, say when they are far away, f(x) is changing fast.
  3. To display the image, the system evaluates f(x) at each pixel. That is, f(x) is sampled at x=0,1,2,3,...
  4. If f(x), when Fourier transformed, has frequencies higher than 1/2 (cycle per pixel), then that sampling is too coarse to capture the signal. See the Nyquist sampling theorem.
  5. When this hi-freq signal is sampled at too low a frequency, then the result computed for the frame buffer will have visual problems.
  6. It's not just that you won't see the hi frequencies. That's obvious.
  7. Worse, you will see fake low frequency signals that were never in the original scene. They are called '''aliases''' of the hi-freq signals.
  8. These artifacts may jump out at you, because of the Mach band effect.
  9. In NTSC video, aliasing can even cause rapid intensity changes to produce fake colors, and vice versa.
  10. Aliasing can occur with time signals, like a movie of a spoked wagon wheel.
  11. This is like a strobe effect.
  12. The solution is to filter out the hi frequencies before sampling, or sample with a convolution filter instead of sampling at a point. That's called '''anti-aliasing'''.
  13. OpenGL solutions:
    1. Mipmaps.
    2. Compute the scene on a higher-resolution frame buffer and average down.
    3. Consider pixels to be squares not points. Compute the fraction of each pixel covered by each object, like a line. Lines have to have finite width.
  14. Refs:
    1. http://en.wikipedia.org/wiki/Aliasing
    2. http://en.wikipedia.org/wiki/Clear_Type
    3. http://en.wikipedia.org/wiki/Wagon-wheel_effect
    4. http://en.wikipedia.org/wiki/Spatial_anti-aliasing (The H. Freeman referenced there worked at RPI for 10 years.)
    5. http://en.wikipedia.org/wiki/Mipmap
    6. http://en.wikipedia.org/wiki/Jaggies
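
A tiny numeric illustration of items 4-7: sampling a 0.9 cycle-per-pixel sine once per pixel gives exactly the samples of a 0.1 cycle-per-pixel sine, the alias, because 0.9 is above the Nyquist limit of 0.5.

    for (var x = 0; x <= 10; x++) {
        var signal = Math.sin(2 * Math.PI * 0.9 * x);   // true signal: 0.9 cycles per pixel
        var alias  = Math.sin(-2 * Math.PI * 0.1 * x);  // what those samples also fit: 0.1 cycles per pixel
        console.log(x, signal.toFixed(3), alias.toFixed(3));  // the two columns are identical
    }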

6   Videos - military applications of graphics

US Military's Futuristic Augmented Reality Battlefield - Augmented Immersive Team Trainer (AITT)

Daqri's Smart Helmet Hands On

HoloLens Review: Microsoft's Version of Augmented Reality

'''Modeling and simulation''' is a standard term.

7   DARPA

  1. I've mentioned DARPA several times; they funded research into computer graphics starting in the 1970s. So here's some enrichment material on DARPA and autonomous vehicles. This shows why the US is the best in the world at R&D.

    Autonomous vehicles make a marvelous example of a successful engineering project. Many different parts all have to work to make the total project succeed. They include laser rangefinding, image recognition, a road database accurate to the specific lane, and GPS navigation. This is also a model for government - university - industry interaction.

    DARPA (the Defense Advanced Research Projects Agency) started this concept with a contest paying several million dollars in prizes. (DARPA started connecting computers in different cities with phone lines in 1968. This was the ARPAnet. They funded computer graphics in the 1970s and some early steps in virtual reality in the 1980s.)

    In the 1st contest, the best vehicle failed after about 10 miles. Five vehicles completed the 130-mile course in the 2nd contest. The winning project leader was Sebastian Thrun, a Stanford CS prof. He quit and moved to Google, which has now been funding this for several years.

    Finally, Dr Tony Tether, Director of DARPA when this happened, is an RPI BS EE grad. (He went to Stanford for his PhD).

8   Stereo viewing

  1. There are a number of techniques for stereo viewing of movies and videos, dating back to the 19th century.

    1. Make the left image red and the right one blue. Use red-blue glasses.

      This decreases the color experience, but the glasses are cheap.

    2. Polarize the left and right images differently. Movie theatres do this.

      The glasses are cheap. There is full color. The projection process is more complicated (= expensive).

    3. The glasses have shutters that block and clear the left and right eyes in turn. In sync, the TV or projector shows the left or right image.

      The projector is cheaper. The glasses are more expensive.

    4. Use a display with a parallax barrier.

      Only one person can view at a time. No glasses are needed.

    5. Use the fact that your eyes see bright objects faster than they see dim objects. The glasses have a simple grey filter over one eye. Beer companies have used this for commercials.

      No special display is needed. This works only with moving objects.

9   My research

  1. RPI article about a completed project that I was part of:

    A Game-Changing Approach: Using the X-Box Kinect as a Sensor to Conduct Centrifuge Research. Team of Rensselaer Researchers Develop New Visualization Method to Evaluate Erosion Quantity and Pattern, November 8, 2016, by Jessica Otitigbe.

  2. Another RPI article, on my student Viana Gomes de Magalhães: The Winning Algorithm, Oct 17, 2016, by Mary Martialay.

10   Updating web pages

  1. You may change a web page after it has been displayed. Editing elements and even adding or deleting them is allowed.
  2. One of the pick programs changes the displayed text in a div to show what you picked.
  3. MathJax, the package I use to render LaTeX math in a web page, edits the displayed page to replace the math source with the rendered math.
  4. A standard way to make things appear and disappear is to include at the start all the elements that will be wanted later. Then make them visible or invisible. The tool on my homepage that makes sections expand and collapse does that. (A small sketch appears at the end of this section.)
  5. Every time something is changed on the page, the locations of everything on the page have to be recomputed. With a slow web browser, like old versions of IE, you can watch this.
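
A tiny sketch of item 4; the ids are my own illustrative names, not the actual code on my homepage.

    // Everything is in the page from the start; clicking just toggles visibility.
    function toggleSection(id) {
        var el = document.getElementById(id);
        el.style.display = (el.style.display === "none") ? "" : "none";  // collapse / expand
    }
    // e.g. <span onclick="toggleSection('sec3')">Section 3</span> ... <div id="sec3">...</div>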