CG Lecture 19, Wed 2016-10-19

  1. Handwritten notes from Monday.

  2. I updated the syllabus to reflect the delayed due dates for the term project proposal and status reports. They're now all due Thurs 9am, the same as the homeworks. However, the final presentation and due dates are unchanged, since RPI prohibits classwork from being due after the end of classes.

  3. Help Wanted: Students needed to help create VR App

    "My name is Loren Bass and I am the new Assistant Director of Graduate Admissions. I would love to see if you know any students who would be interested in helping our team create a VR app for our Fall 2017 recruitment efforts. The idea came as I noticed an article about Georgia Tech’s MBA recruitment team (https://www.scheller.gatech.edu/news-events/latest-news/2016/articles/scheller-college-uses-virtual-reality-to-reach-mba-prospects-across-the-us%20.html) and would love to make sure RPI stays competitive in recruiting the top future leaders for our programs. If you know any students who would be interested in helping build this experience during the spring and summer semesters I would sincerely appreciate the help.

    Many thanks,

    Loren Bass"

    (This could be a term project - WRF).

  4. iClicker questions.

  5. Summary of recent PowerPoint slides.

    1. I didn't show WebGL Transformations Angel_UNM_14_5_4.ppt since it is mostly obsolete.

      1. The old OpenGL modelview and projection matrix idea is now deprecated, but is interesting for its subdivision of transformations into two functions.
      2. The modelview matrix moves the world so that the camera is where you want it, relative to the objects. Unless you did a scale, the transformation is rigid - it preserves distances (and therefore also angles).
      3. The projection matrix view-normalizes the world to effect your desired projection and clipping. For a perspective projection, it does not preserve distances or angles, but does preserve straight lines.
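      A tiny check of the rigidity claim, in plain JavaScript (the helper names rotateZ and dist are mine, not from the course code): rotate two points and confirm their separation is unchanged.

      ```javascript
      // A 2-D rotation is rigid: it preserves distances between points.
      function rotateZ(deg) {
        const r = deg * Math.PI / 180, c = Math.cos(r), s = Math.sin(r);
        return p => [c * p[0] - s * p[1], s * p[0] + c * p[1]];
      }
      function dist(p, q) {
        return Math.hypot(p[0] - q[0], p[1] - q[1]);
      }
      const R = rotateZ(30);
      const a = [1, 0], b = [4, 4];
      console.log(dist(a, b), dist(R(a), R(b))); // both 5, up to rounding
      ```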
    2. 6_1 Applying Transformations.

    3. 6_2 Building Models.

      Big idea: separate the geometry from the topology.
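      A minimal sketch of that separation in plain JavaScript (array names are illustrative, not from the book's code): the position array is the geometry, the index array is the topology, and either can change without touching the other.

      ```javascript
      // A unit square: geometry (where the vertices are) kept separate
      // from topology (how vertices connect into triangles).
      const positions = [        // geometry: 4 unique 2-D vertices
        0, 0,   1, 0,   1, 1,   0, 1,
      ];
      const indices = [          // topology: two triangles sharing an edge
        0, 1, 2,
        0, 2, 3,
      ];
      // Moving a vertex changes the shape but not the connectivity:
      positions[4] = 1.5;        // stretch vertex 2 in x
      console.log(indices.length / 3); // still 2 triangles
      ```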

    4. The Rotating Square.

      Big idea: render by elements.
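      What rendering by elements buys you, sketched in plain JavaScript (the expand helper is mine and illustrates the concept, not the actual GL internals): gl.drawElements walks an index list and fetches shared vertices from one position array, so the square's two triangles store 4 vertices instead of the 6 that gl.drawArrays would need.

      ```javascript
      // Conceptually, drawElements expands indices into the flat vertex
      // stream that drawArrays would otherwise have to store.
      function expand(positions, indices, size) {
        const out = [];
        for (const i of indices) {
          out.push(...positions.slice(i * size, i * size + size));
        }
        return out;
      }
      const positions = [0, 0,  1, 0,  1, 1,  0, 1];  // 4 shared vertices
      const indices = [0, 1, 2,  0, 2, 3];            // square as 2 triangles
      const flat = expand(positions, indices, 2);
      console.log(flat.length / 2); // 6 vertices when fully expanded
      ```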

    5. 6_3 Classical Viewing.

      Big ideas: parallel (orthographic) and perspective projections. The fine distinctions between subclasses of projections are IMO obsolete.

    6. 6_4 Computer Viewing: Positioning the Camera.

    7. 6_5 Computer Viewing: Projection.

      Big idea: view normalization. We'll see this more in chapter 7.

  6. Notes about the computer programs:

    1. The point is not to teach JavaScript; it's just the vehicle. Before JavaScript, I taught C++. JavaScript is easier.
    2. One point is to teach a widely used API, i.e., WebGL.
    3. Another point is to teach graphics concepts like projection and viewing, and their APIs, like lookAt.
    4. These concepts will exist in any graphics API.
  7. Programs from chapter 5.

    1. hata shows:
      1. Using a modelview matrix set by lookAt and a projection matrix set by ortho.
      2. Drawing a mesh with line strips.
    2. hat shows:
      1. Drawing the same points both as a triangle fan and as a line loop.
      2. Setting options to make the lines slightly in front of the triangles.
      3. Doc for gl.depthFunc.
      4. Doc for gl.polygonOffset.
    3. ortho1 and ortho2 show clipping with an interactive orthographic (parallel) projection.
    4. perspective1 and perspective2 show an interactive perspective projection.
    5. shadow shows a shadow.
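    A sketch of what the lookAt call in these programs computes, in the style of the book's MV.js (a simplified version with assumed names, not the actual library code): it builds the modelview matrix that moves the world so the camera sits at the origin looking down -z.

    ```javascript
    // Small vector helpers.
    function sub(a, b) { return a.map((x, i) => x - b[i]); }
    function dot(a, b) { return a.reduce((s, x, i) => s + x * b[i], 0); }
    function cross(a, b) {
      return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]];
    }
    function normalize(a) { const l = Math.hypot(...a); return a.map(x => x / l); }

    function lookAt(eye, at, up) {
      const n = normalize(sub(eye, at));   // camera's +z axis
      const u = normalize(cross(up, n));   // camera's +x axis
      const v = cross(n, u);               // camera's +y axis
      return [                             // row-major 4x4
        [...u, -dot(u, eye)],
        [...v, -dot(v, eye)],
        [...n, -dot(n, eye)],
        [0, 0, 0, 1],
      ];
    }
    // The eye itself lands at the origin:
    const M = lookAt([0, 0, 5], [0, 0, 0], [0, 1, 0]);
    const p = [0, 0, 5, 1];
    console.log(M.map(row => dot(row, p))); // [0, 0, 0, 1]
    ```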
  8. New slides:

    1. 7_1 Orthogonal projection matrices.

      Big idea: Given any orthogonal projection and clip volume, we transform the objects so that viewing the new objects with the default projection (x,y,z) -> (x,y,0) and clip volume (-1,-1,-1) to (1,1,1) produces the same image. That's a normalization transformation. See slide 14.
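      A sketch of that normalization matrix, MV.js-style (the signature ortho(left, right, bottom, top, near, far) is assumed): it scales and translates the requested clip volume into the canonical cube.

      ```javascript
      // Orthographic normalization: map [l,r] x [b,t] x [-n,-f] in eye
      // space onto the canonical cube (-1,-1,-1) to (1,1,1).
      function ortho(l, r, b, t, n, f) {
        return [
          [2 / (r - l), 0, 0, -(r + l) / (r - l)],
          [0, 2 / (t - b), 0, -(t + b) / (t - b)],
          [0, 0, -2 / (f - n), -(f + n) / (f - n)],
          [0, 0, 0, 1],
        ];
      }
      // The canonical volume maps to itself, apart from the z flip from
      // right-handed eye space to left-handed clip space:
      console.log(ortho(-1, 1, -1, 1, -1, 1));
      ```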

    2. 7_2 Perspective projection matrices and the normalization transformation.

      Big idea: We can do the same with perspective projections. The objects are distorted like in a fun house. See slide 8.
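      The corresponding perspective sketch (the signature perspective(fovy, aspect, near, far) is assumed, MV.js-style): after the multiply and the divide by w', the viewing frustum becomes the same canonical cube, so the rest of the pipeline is identical to the orthographic case.

      ```javascript
      // Perspective normalization matrix for a symmetric frustum.
      function perspective(fovyDeg, aspect, near, far) {
        const f = 1 / Math.tan((fovyDeg * Math.PI / 180) / 2);
        return [
          [f / aspect, 0, 0, 0],
          [0, f, 0, 0],
          [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
          [0, 0, -1, 0],   // w' = -z_eye drives the perspective divide
        ];
      }
      function mul(M, v) {
        return M.map(r => r.reduce((s, m, i) => s + m * v[i], 0));
      }
      const P = perspective(90, 1, 1, 3);
      // After dividing by w', the near plane lands at z = -1:
      const nearPt = mul(P, [0, 0, -1, 1]);
      console.log(nearPt[2] / nearPt[3]); // -1
      ```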

    3. 7_3 Meshes.

      Big ideas:

      1. Meshes are common for representing complicated shapes.
      2. We can plot a 2½-D mesh back to front without needing a depth buffer.
      3. Unstructured point clouds with millions of points from laser scanners are common. We can turn them into a mesh by using various heuristics about the expected object.
      4. Drawing the mesh edges a little closer than the faces is a hack that probably works most of the time if the scene isn't too complex. I.e., go ahead and use it, but don't be surprised if it occasionally fails.
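      A toy sketch of the back-to-front idea in big idea 2 (all names here are mine): for a heightfield z = h(x, y), draw the grid rows farthest from the viewer first, so nearer rows simply overwrite farther ones and no depth buffer is needed.

      ```javascript
      // Painter's-algorithm ordering for a 2.5-D heightfield's rows:
      // sort row indices by distance from the viewer, farthest first.
      function rowOrder(numRows, viewerRow) {
        return [...Array(numRows).keys()]
          .sort((a, b) => Math.abs(b - viewerRow) - Math.abs(a - viewerRow));
      }
      console.log(rowOrder(5, 0)); // [4, 3, 2, 1, 0]: farthest row drawn first
      ```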