CG Lecture 26, Thu 2016-11-10

  1. 11_3 Agent based models.

  2. Where I was last week: 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL 2016)

    1. I gave the keynote talk at the Third ACM SIGSPATIAL 2016 PhD Symposium.
    2. Our team of Salles Magalhaes, Wenli Li, Marcus Andrade (Federal University of Vicosa, Brazil), and me took 2nd place in the GISCUP programming contest. Here are our talk and paper.
  3. I've mentioned DARPA several times; they funded research into computer graphics starting in the 1970s. So here's some enrichment material on DARPA and autonomous vehicles. This shows why the US is the best in the world at R&D.

    Autonomous vehicles make a marvelous example of a successful engineering project. Many different parts all have to work to make the total project succeed. They include laser rangefinding, image recognition, a road database accurate to the specific lane, and GPS navigation. This is also a model for government-university-industry interaction.

    DARPA (the Defense Advanced Research Projects Agency) started this concept with contests paying several million dollars in prizes. (DARPA started connecting computers in different cities with phone lines in 1968. This was the ARPAnet. They funded computer graphics in the 1970s and some early steps in virtual reality in the 1980s.)

    In the 1st contest, the best vehicle failed after about 10 miles. Five vehicles completed the 130 mile course in the 2nd contest. The winning project leader was Sebastian Thrun, a Stanford CS prof. He then moved to Google, which has been funding this work for several years.

    Finally, Dr Tony Tether, Director of DARPA when this happened, is an RPI BS EE grad. (He went to Stanford for his PhD.)

  4. More programs from Chapter 7.

    I'm not covering all the gruesome details of these programs, but just hitting highlights.

    1. particleDiffusion: buffer ping-ponging with 50 particles, initially placed randomly and then moving randomly, with their previous positions diffused as a texture.

      It renders to a texture instead of to a color buffer. Then it uses the texture.

      There are better descriptions on the web; search for 'webgl render texture'. However, I'll stick with the textbook code, since jumping back and forth is confusing, partly because different sources use different utility files.

      There are often better descriptions on the web than in the textbook. I've considered running a course only with web material. What do you think?

      https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API has good documentation.
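
      The ping-pong idea can be sketched in plain JavaScript, with two arrays standing in for the two textures. In the real WebGL program these would be textures attached to framebuffers, and the "move" step would be a shader pass; all names here are illustrative, not from the textbook code.

      ```javascript
      // Minimal ping-pong sketch: read from one buffer, write to the other,
      // then swap, so last frame's output becomes next frame's input.
      function makePingPong(n) {
        // Two position buffers of n (x, y) pairs.
        const buffers = [new Float32Array(2 * n), new Float32Array(2 * n)];
        let read = 0; // index of the buffer currently being sampled
        // Random initial placement, like the 50 randomly placed particles.
        for (let i = 0; i < 2 * n; i++) buffers[0][i] = Math.random();
        return {
          step(move) { // one "frame": apply move() to every coordinate
            const src = buffers[read], dst = buffers[1 - read];
            for (let i = 0; i < 2 * n; i++) dst[i] = move(src[i]);
            read = 1 - read; // swap the roles of the two buffers
          },
          positions() { return buffers[read]; },
        };
      }

      // Usage: 50 particles each taking a small random step per frame.
      const pp = makePingPong(50);
      pp.step(x => x + 0.01 * (Math.random() - 0.5));
      ```

      In WebGL the swap is just rebinding which texture is the framebuffer attachment and which is the sampler input; no pixel data is copied.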

    2. pickcube etc. This prints the name of the face that you pick.

      1. This draws a label for each face into a separate buffer.
      2. Then it indexes into it with the coordinates of the mouse.
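
      The decoding step can be sketched as follows. Each face is drawn into an offscreen buffer in a unique flat color; the pixel under the mouse is then read back (in WebGL, with gl.readPixels, remembering that GL's origin is the bottom-left corner) and mapped to a face name. The color-to-face table below is made up for illustration; the textbook's values differ.

      ```javascript
      // Hypothetical label colors: one unique RGB per cube face.
      const FACE_COLORS = {
        '255,0,0':   'right',
        '0,255,0':   'left',
        '0,0,255':   'top',
        '255,255,0': 'bottom',
        '0,255,255': 'front',
        '255,0,255': 'back',
      };

      // rgba: the 4 bytes a gl.readPixels call returns at the mouse position.
      function pickFace(rgba) {
        const key = Array.from(rgba).slice(0, 3).join(','); // ignore alpha
        return FACE_COLORS[key] || null; // null = background, nothing picked
      }
      ```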
  5. Stereo viewing. There are a number of techniques for stereo viewing of movies and videos, dating back to the 19th century.

    1. Make the left image red and the right one blue. Use red-blue glasses.

      This decreases the color experience, but the glasses are cheap.
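
      The compositing step can be sketched in plain JavaScript: keep the red channel from the left-eye image and the blue channel from the right-eye image. Images here are flat RGBA byte arrays, as gl.readPixels would return; this is a sketch of the general anaglyph idea, not code from the textbook.

      ```javascript
      // Combine a left-eye and right-eye image into one red-blue anaglyph.
      function anaglyph(left, right) {
        const out = new Uint8ClampedArray(left.length);
        for (let i = 0; i < left.length; i += 4) {
          out[i]     = left[i];      // red channel from the left eye
          out[i + 1] = 0;            // green dropped: why color suffers
          out[i + 2] = right[i + 2]; // blue channel from the right eye
          out[i + 3] = 255;          // fully opaque
        }
        return out;
      }
      ```

      The red filter passes only the left image to the left eye and the blue filter only the right image to the right eye, which is why the glasses can be so cheap.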

    2. Polarize the left and right images differently. Movie theatres do this.

      The glasses are cheap. There is full color. The projection process is more complicated (= expensive).

    3. The glasses have shutters that alternately block the left and right eyes. In sync, the TV or projector shows the left or right image.

      The projector is cheaper. The glasses are more expensive.

    4. Use a display with a parallax barrier.

      Only one person can view at a time. No glasses are needed.

    5. Use the fact that your eyes register bright objects faster than dim ones (the Pulfrich effect). The glasses have a simple grey filter over one eye. Beer companies have used this for commercials.

      No special display is needed. This works only with moving objects.

  6. 12_1 Hierarchical Modeling 1.