CG Class 24, Thurs 2017-11-02

1   Next week

  1. Prof Radke will talk on Mon.
  2. No class on Thurs.

2   Extended due dates

  1. Homework 6 now due today.
  2. Term project progress report due Mon.

3   Vive VR and Ricoh Theta V available

Salles now has the Vive running. In his opinion, it's better than the Microsoft HoloLens.

5   Why textures are hard to do

  1. ... although the idea is simple.
  2. You want to take each texel on each object, find which pixel it would draw onto, and draw it there.
  3. That's too expensive, and because of the projection, one texel almost never covers exactly one pixel.
    1. Sometimes several texels will project to the same pixel, so they have to be averaged together to get the pixel color.
    2. At other times, one texel will project over several pixels, so you want to blend pixel colors smoothly from one texel to the next.
    3. Most texels will be invisible in the final image; processing them would waste a lot of computing power, even on today's computers.
  4. So, this is how it's really implemented.
    1. For each visible fragment, i.e., pixel, find which texel would draw there; a shader sketch follows this list.
    2. This requires a backwards version of the graphics pipeline.
    3. The interesting surfaces are curved; typically a bicubic polynomial maps each parameter pair (u,v) to a point (x,y,z). You have to invert that.
    4. All this has to be computed at hundreds of millions of pixels per second.
    5. It usually utilizes special texture mapping hardware for speed.
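
Concretely, this backwards, per-fragment lookup is what a WebGL fragment shader's texture access expresses; the projection inversion and the filtering are hidden in the rasterizer's interpolation and the sampling hardware. A minimal GLSL sketch (variable names are illustrative, not from any particular textbook program):

    precision mediump float;
    varying vec2 fTexCoord;       // (s,t), interpolated by the rasterizer for this fragment
    uniform sampler2D uTexture;

    void main() {
        // For this visible fragment, fetch (and filter) the corresponding texel(s).
        gl_FragColor = texture2D(uTexture, fTexCoord);
    }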

CG Class 23, Mon 2017-10-30

1   Next week

  1. Prof Radke will talk on Mon.
  2. No class on Thurs.

2   Vive VR and Ricoh Theta V available

ECSE has bought a Vive VR fully immersive first-person experience system and a Ricoh Theta V 360-degree camera for this class to play with.

You will be able to use the Vive in my lab in JEC6115 and to borrow the Ricoh. Contact the TAs to make a reservation. They will split the access among the interested people.

How many people are interested (show of hands)?

4   Computing surface normals

  1. For a curved surface, the normal vector at a point on the surface is the cross product of two tangent vectors at that point. They must not be parallel to each other.
  2. If it's a parametric surface, partial derivatives are tangent vectors.
  3. A mesh is a common way to approximate a complicated surface.
  4. For a mesh of flat (planar) pieces (facets), as sketched in code after this list:
    1. Find the normal to each facet.
    2. Average the normals of the facets around each vertex to get a normal vector at each vertex.
    3. Apply Phong (or Gouraud) shading from those vertex normals.
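
A minimal JavaScript sketch of steps 1 and 2, assuming a hypothetical flat mesh layout (positions as a flat xyz array, triangles as a flat index array; all names are mine):

    // Average facet normals to get per-vertex normals.
    function computeVertexNormals(positions, triangles) {
        const normals = new Float32Array(positions.length);  // zero-initialized accumulators
        for (let t = 0; t < triangles.length; t += 3) {
            const [i, j, k] = [triangles[t], triangles[t + 1], triangles[t + 2]];
            // Cross product of two (non-parallel) edges is the facet normal.
            // Its length is twice the facet area, so summing these weights facets
            // by area, a common variant of the plain average described above.
            const e1 = sub(positions, j, i), e2 = sub(positions, k, i);
            const n = [e1[1] * e2[2] - e1[2] * e2[1],
                       e1[2] * e2[0] - e1[0] * e2[2],
                       e1[0] * e2[1] - e1[1] * e2[0]];
            for (const v of [i, j, k])
                for (let c = 0; c < 3; c++) normals[3 * v + c] += n[c];
        }
        // Normalize each accumulated vertex normal.
        for (let v = 0; v < normals.length; v += 3) {
            const len = Math.hypot(normals[v], normals[v + 1], normals[v + 2]);
            if (len > 0) { normals[v] /= len; normals[v + 1] /= len; normals[v + 2] /= len; }
        }
        return normals;   // feed these to Phong or Gouraud shading, as in step 3
    }

    function sub(p, a, b) {   // p[a] - p[b], treating p as packed xyz triples
        return [p[3 * a] - p[3 * b], p[3 * a + 1] - p[3 * b + 1], p[3 * a + 2] - p[3 * b + 2]];
    }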

6   Textures

Today's big new idea.

  1. Textures started as a way to paint images onto polygons to simulate surface details. They add per-pixel surface details without raising the geometric complexity of a scene.
  2. That morphed into a general array data format with fast I/O.
  3. If you read a texture with indices that are fractions, the hardware interpolates a value, using one of several algorithms. This is called sampling. E.g., reading T[1.1,2] returns something like 0.9*T[1,2] + 0.1*T[2,2]; see the sketch after this list.
  4. Textures involve many coordinate systems:
    1. (x,y,z,w) - world.
    2. (u,v) - parameters on one polygon
    3. (s,t) - location in a texture.
  5. Aliasing is also important: when texels and pixels are mismatched in size, naive sampling produces artifacts such as jaggies and moiré patterns.
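
A sketch of one such interpolation (bilinear), for a hypothetical JavaScript 2D array T indexed as T[s][t]; real hardware adds edge modes (clamp, repeat) and other filters such as mipmapping:

    function sampleBilinear(T, s, t) {
        const s0 = Math.floor(s), t0 = Math.floor(t);
        const s1 = Math.min(s0 + 1, T.length - 1);      // clamp neighbors at the array edge
        const t1 = Math.min(t0 + 1, T[0].length - 1);
        const fs = s - s0, ft = t - t0;
        // Blend the four surrounding texels by their overlap weights.
        return (1 - fs) * (1 - ft) * T[s0][t0] + fs * (1 - ft) * T[s1][t0]
             + (1 - fs) * ft * T[s0][t1] + fs * ft * T[s1][t1];
    }

    // E.g., sampleBilinear(T, 1.1, 2) = 0.9*T[1][2] + 0.1*T[2][2], as in the example above.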

7   Chapter 9 slides

  1. 9_1 Buffers.

    Ignore anything marked old or deprecated.

    Not a lot of content in this file.

  2. 9_2 Bitblt.

  3. 9_3 Texture mapping.

    Start of a big topic.

CG Class 22, Thurs 2017-10-26


2   Chapter 6 programs

  1. Chapter 6

    1. wireSphere: wire frame of recursively generated sphere
    2. shadedCube: rotating cube with modified Phong shading
    3. shadedSphere1: shaded sphere using true normals and per vertex shading
    4. shadedSphere2: shaded sphere using true normals and per fragment shading
    5. shadedSphere3: shaded sphere using vertex normals and per vertex shading
    6. shadedSphere4: shaded sphere using vertex normals and per fragment shading
    7. shadedSphereEyeSpace and shadedSphereObjectSpace show how lighting computations can be carried out in these spaces
  2. Summary of the new part of shadedCube:

    1. var nBuffer = gl.createBuffer();

      Reserve a buffer id.

    2. gl.bindBuffer( gl.ARRAY_BUFFER, nBuffer );

      1. Create that buffer as a buffer of data items, one per vertex.
      2. Make it the current buffer for future buffer operations.
    3. gl.bufferData( gl.ARRAY_BUFFER, flatten(normalsArray), gl.STATIC_DRAW );

      Write an array of normals, flattened to remove metadata, into the current buffer.

    4. var vNormal = gl.getAttribLocation( program, "vNormal" );

      Get the address of the shader (GPU) variable named "vNormal".

    5. gl.vertexAttribPointer( vNormal, 3, gl.FLOAT, false, 0, 0 );

      Declare that the current buffer contains 3 floats per vertex.

    6. gl.enableVertexAttribArray( vNormal );

      Enable the array for use.

    7. (in the shader) attribute vec3 vNormal;

      Declare the variable in the vertex shader that will receive each row of the JavaScript array as each vertex is processed.

    8. The whole process is repeated with the vertex positions.

      Note that the variable with vertex positions is not hardwired here. You pass in whatever data you want, and your shader program uses it as you want.
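
Putting the steps together for the positions, a sketch (pointsArray and the attribute name vPosition follow the textbook's conventions; positions are assumed to be stored as vec4s):

    var vBuffer = gl.createBuffer();             // reserve a buffer id
    gl.bindBuffer( gl.ARRAY_BUFFER, vBuffer );   // make it the current buffer
    gl.bufferData( gl.ARRAY_BUFFER, flatten(pointsArray), gl.STATIC_DRAW );
    var vPosition = gl.getAttribLocation( program, "vPosition" );
    gl.vertexAttribPointer( vPosition, 4, gl.FLOAT, false, 0, 0 );   // 4 floats per vertex
    gl.enableVertexAttribArray( vPosition );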

CG Class 21, Mon 2017-10-23

1   Homework 6 online

Homework 6 is online; due in a week.

3   Slides

  1. 7_3 Meshes.

    Big ideas:

    1. Meshes are common for representing complicated shapes.
    2. We can plot a 2 1/2 D mesh back to front w/o needing a depth buffer.
    3. Unstructured point clouds with millions of points from laser scanners are common. We can turn them into a mesh by using various heuristics about the expected object.
    4. Drawing the mesh edges a little closer than the faces is a hack that probably works most of the time if the scene isn't too complex. I.e., go ahead and use it, but don't be surprised if it occasionally fails.
  2. 7_4 Shadows.

    Big idea:

    1. If there is one light source, we can compute visibility with it as the viewer.
    2. Somehow mark the visible (i.e., lit) and hidden (i.e., shadowed) portions of the objects.
    3. Then recompute visibility from the real viewer and shade the visible objects depending on whether they are lit or shadowed.
    4. This works for a small number of lights.
    5. This capability is in OpenGL but not in WebGL, so this topic is enrichment only (= will not be examined).
  3. 7_5 Lighting and shading I.

Big big topic.

4   Lighting etc

  1. This Woman sees 100 times more colors than the average person.
  2. Phong lighting model: the total light at a pixel is the sum of the following terms (written as a formula after this list):
    1. Incoming ambient light times ambient reflectivity of the material at the pixel,
    2. Incoming diffuse light times diffuse reflectivity times a factor for the light source being low on the horizon,
    3. Incoming specular light times specular reflectivity times a factor for the eye not being aligned to the reflection vector, with an exponent for the material shininess,
    4. Light emitted by the material.
    5. That is not intended to be completely physical, but to give the programmer lots of parameters to tweak.
  3. In OpenGL you can do several possible levels of shading. Pick one of the following choices. Going down the list makes the shading better but costlier.
    1. Shade the whole polygon to be the color that you specified for one of the vertices.
    2. Bilinearly shade the polygon, triangle by triangle, from the colors you specified for its vertices.
    3. Use the Phong lighting model to compute the color of each vertex from that vertex's normal. Bilinearly interpolate that color over the polygon. That is called Gouraud shading.
    4. Bilinearly interpolate a surface normal at each pixel from normals that you specified at each vertex. Then normalize the length of each interpolated normal vector. Evaluate the Phong lighting model at each pixel from the interpolated normal. That is called Phong shading.
  4. Maureen Stone: Representing Colors as 3 Numbers (enrichment)
  5. Why do primary schools teach that the primary colors are Red Blue Yellow?
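
In symbols, the Phong sum in item 2 above is commonly written as

    \(I = k_a L_a + k_d L_d \max(\mathbf{l}\cdot\mathbf{n},0) + k_s L_s \max(\mathbf{r}\cdot\mathbf{v},0)^{\alpha} + I_e\)

where \(\mathbf{n}\) is the unit surface normal, \(\mathbf{l}\) points toward the light, \(\mathbf{r}\) is \(\mathbf{l}\) reflected about \(\mathbf{n}\), \(\mathbf{v}\) points toward the eye, the \(k\)'s are the material reflectivities, the \(L\)'s the incoming light intensities, \(\alpha\) the shininess exponent, and \(I_e\) the emitted light. (The notation is one common choice, not exactly the text's.)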

5   Possible spring courses

  1. ECSE-4740 Applied Parallel Computing for Engineers. Its instructor is excellent. There is also a 6000-level version.
  2. An independent Computer Graphics reading course directed by me.

6   RPI's long history in Computer Graphics

  1. Curtis Priem, RPI alumnus and co-founder of NVIDIA.
  2. Mike Wozny, Head of ECSE, was one of the founders of IEEE Computer Graphics and Applications, which printed that article by Maureen Stone.
  3. William Lorensen, co-inventor of marching cubes, earned his BS and MS here at RPI.

CG Homework 6, due Mon 2017-10-30 2359

  1. (4 pts) Use the homogeneous matrix to project homogeneous points onto the plane x+3y+2z=4, with COP at the origin. What does the point (1,2,4,5) project to? Give the answer as a Cartesian point.

  2. (4 pts) Repeat the previous question with the COP changed to (1,1,1,1).

  3. (6 pts) Do exercise 5.6 on page 272 of the text.

  4. (6 pts) This question will take some thinking.

    Imagine that you have an infinitely large room illuminated by one infinitely long row of point lights. This figure shows a side view of the room.

    The lights are h above the floor and are 1 meter from each other. Assume that the ceiling above the lights is black and that no light reflects off of anything.

    An object at distance d from a light gets illuminated with a brightness \(\frac{1}{d^2}\).

    Each point on the floor is illuminated by all the lights, but more brightly by the closer lights.

    A point p directly below a light will be a little brighter than a point q halfway between two such points. That is the problem: we want the floor (at least the part directly below the line of lights) to be evenly lit, to within 5%.

    However, the higher the line of lights, the more evenly the floor will be lit.

    Your question is to tell us what is the minimum value for h so that the line of the floor below the line of lights is evenly lit within 5%.

    [Figure ../../images/hw-lights.png: side view of the room, with the row of lights at height h above the floor.]

    E.g., the brightness at p is

    \(\sum_{i=-\infty}^{\infty} \;\; \frac{1}{\left(h^2+i^2\right)}\)
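
    One way to explore the problem numerically (a sketch; truncating the infinite sum at ±N is an approximation, and the example h is arbitrary):

      // Brightness at a point offset meters from the foot of the nearest
      // light, for lights at height h, truncating the sum at +/-N terms.
      function brightness(offset, h, N) {
          let b = 0;
          for (let i = -N; i <= N; i++)
              b += 1 / (h * h + (i - offset) * (i - offset));
          return b;
      }

      const h = 3, N = 100000;            // example height; N large enough that the tail is negligible
      const bp = brightness(0, h, N);     // p, directly below a light
      const bq = brightness(0.5, h, N);   // q, halfway between two lights
      console.log('relative variation:', (bp - bq) / bp);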

(Total: 20 points.)

CG Class 20, Thurs 2017-10-19

1   IOCCC programs

Local copy of some of them.

Fun examples of what you can do in a few lines of code.

2   Comments on recent powerpoint slides

  1. I didn't show WebGL Transformations Angel_UNM_14_5_4.ppt in detail since it is mostly obsolete.
    1. The old OpenGL modelview and projection matrix idea is now deprecated, but is interesting for its subdivision of transformations into two functions.
    2. The modelview matrix moves the world so that the camera is where you want it, relative to the objects. Unless you did a scale, the transformation is rigid - it preserves distances (and therefore also angles).
    3. The projection matrix view-normalizes the world to effect your desired projection and clipping. For a perspective projection, it does not preserve distances or angles, but does preserve straight lines.
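
A sketch of the two-matrix idea using the textbook's MV.js helpers (argument conventions as in MV.js; the uniform names are illustrative):

    var eye = vec3(0, 1, 5), at = vec3(0, 0, 0), up = vec3(0, 1, 0);
    var modelViewMatrix = lookAt(eye, at, up);            // rigid: moves the world so the camera is at the origin
    var projectionMatrix = perspective(45, 1, 0.1, 100);  // fovy, aspect, near, far: preserves lines, not distances
    gl.uniformMatrix4fv(gl.getUniformLocation(program, "modelViewMatrix"),
                        false, flatten(modelViewMatrix));
    gl.uniformMatrix4fv(gl.getUniformLocation(program, "projectionMatrix"),
                        false, flatten(projectionMatrix));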

4   More Chapter 6 slides

  1. 6_4 Computer Viewing: Positioning the Camera.

  2. 6_5 Computer Viewing: Projection.

    Big idea: view normalization. We'll see this more in chapter 7.

5   Chapter 7 slides

  1. 7_1 Orthogonal projection matrices.

    Big idea: Given any orthogonal projection and clip volume, we transform the object so that we can view the new object with projection (x,y,z) -> (x,y,0) and clip volume (-1,-1,-1) to (1,1,1) and get the same image. That's a normalization transformation; the matrix is written out at the end of this section. See slide 14.

  2. 7_2 Perspective projection matrices and the normalization transformation.

    Big idea: We can do the same with perspective projections. The objects are distorted like in a fun house. See slide 8.
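
For the orthographic case, the normalization is just a translation followed by a scale. The standard matrix, for a view volume with x in [l,r], y in [b,t], and near/far distances n,f (looking down -z), is

    \(N = \begin{pmatrix} \frac{2}{r-l} & 0 & 0 & -\frac{r+l}{r-l} \\ 0 & \frac{2}{t-b} & 0 & -\frac{t+b}{t-b} \\ 0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\ 0 & 0 & 0 & 1 \end{pmatrix}\)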

CG Class 18, Mon 2017-10-16

2   Textbook Slides

  1. 4_4 Position Input

  2. 4_5 Picking

  3. 4_6 Geometry

  4. Picking

    The user selects an object on the display with the mouse. How can we tell which object was selected? This is a little tricky.

    E.g., it's not enough to know what line of code was executed to draw that pixel (and even determining that isn't trivial). That line may be in a loop in a subroutine called from several places. We want to know, sort of, the whole current state of the program. Also, if that pixel was drawn several times, we want to know about only the last time that that pixel changed.

    Imagine that the image shows cars in the north lot. You select a pixel that's part of the image of a wheel nut. However there are many wheel nuts in the image, perhaps all drawn with the same subroutine. You might want to know that you selected the 4th wheel nut on the right front wheel of the 2nd car on the left of the 3rd row from the front.

    There are various messy methods, which are all now deprecated. The new official way is to use the color buffer to code the objects; a sketch follows the list below.

    1. Decide what granularity you want for the selection. E.g., in the north lot, you may not care about specific wheel nuts, but just about wheels. You want to know that you selected the right front wheel of the 2nd car on the left of the 3rd row from the front.
    2. Assign each wheel in the lot a different id number.
    3. When drawing the scene, use each wheel's id number for its color.
    4. Look at the contents of the color buffer pixel to know what you picked.
    5. Perhaps really store this info in a 2nd buffer parallel to the color buffer, so the image will look better.
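
    A sketch of the readback half of this scheme, after the scene has been drawn with id-coded colors (all names are illustrative):

      function pick(gl, mouseX, mouseY, canvasHeight) {
          var pixel = new Uint8Array(4);
          // WebGL's y axis runs bottom-up; the mouse's runs top-down.
          gl.readPixels(mouseX, canvasHeight - mouseY, 1, 1,
                        gl.RGBA, gl.UNSIGNED_BYTE, pixel);
          // Decode an id packed into the RGB bytes (up to 2^24 objects).
          return pixel[0] + (pixel[1] << 8) + (pixel[2] << 16);
      }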

4   Homogeneous coords ctd

  1. The big topic for Chapter 5 is homogeneous coordinates. They allow all the common transformations - translation, rotation, scaling, projection - to be expressed as a multiplication by a 4x4 matrix.

    My take on homogeneous coordinates.
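
    E.g., translation, which cannot be written as a matrix product on 3D Cartesian points, becomes one in homogeneous coordinates:

    \(\begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} x+t_x \\ y+t_y \\ z+t_z \\ 1 \end{pmatrix}\)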

5   Term project proposals

  1. See the syllabus.
  2. I moved the due date back to next Mon.
  3. This replaces the homework.
  4. Submit on LMS.

6   Research paper for ECSE-6964

  1. If you are registered for the grad version of this class, then in addition to the term project, you must also write a research paper; see the syllabus.
  2. Simultaneously with the term project progress reports, please submit research paper progress reports.

CG Class 16, Wed 2017-10-11

Review before tomorrow's midterm exam

  1. The exam may contain some recycled homework questions.
  2. The exam will contain material on rotations, but no quaternions or homogeneous coordinates.
  3. I got questions from:
    1. old exams
    2. my handwritten notes
    3. the class wiki
    4. my web pages mentioned in the class wiki, unless they were marked enrichment
    5. one or two things I said in class
  4. Here are several old exams, with some solutions. This year's topics will be slightly different, but will be largely the same. OTOH, since there are a finite number of electrons in the universe and they say that recycling is good, I'll recycle many of these questions.
    1. Midterm F2016.
    2. Midterm F2014, Midterm F2014 Solution
    3. Midterm F2013, Midterm F2013 Solution
    4. Midterm F2012, Midterm F2012 Solution
  5. You may bring in one 2-sided 8.5"x11" paper with notes. You may not share the material with each other during the exam. No collaboration or communication (except with the staff) is allowed.

Handwritten notes from class

Here.

CG Class 15, Tues 2017-10-10

2   Prof Radke talk on Nov 6

On Monday 2017-11-6, Prof Rich Radke will speak to the class while I am at ACM SIGSPATIAL.

He has written Computer Vision for Visual Effects.

3   Textbook Slides

We'll continue with the textbook powerpoint slides.

The big topic for Chapter 5 is homogeneous coordinates. They allow all the common transformations - translation, rotation, scaling, projection - to be expressed as a multiplication by a 4x4 matrix.

We'll do the chapter 5 slides really quickly.

  1. 5_2 Homogeneous coordinates

    My take on homogeneous coordinates.

  2. 5_3 Transformations.

  3. 5_4 WebGL transformations

  4. 5_5 Applying transformations

  5. WebGL Transformations Angel_UNM_14_5_4.ppt. We'll skim this since it is mostly obsolete.

    1. The old OpenGL modelview and projection matrix idea is now deprecated, but is interesting for its subdivision of transformations into two functions.
    2. The modelview matrix moves the world so that the camera is where you want it, relative to the objects. Unless you did a scale, the transformation is rigid - it preserves distances (and therefore also angles).
    3. The projection matrix view-normalizes the world to effect your desired projection and clipping. For a perspective projection, it does not preserve distances or angles, but does preserve straight lines.
  6. 6_1 Building Models.

    Big idea: separate the geometry from the topology.

  7. The Rotating Square.

    Big idea: render by elements.

  8. 6_3 Classical Viewing.

    Big ideas: parallel (orthographic) and perspective projections. The fine distinctions between subclasses of projections are IMO obsolete.

4   Angel programs - Chapter 5

Chapter 5

  1. hata shows:
    1. Using a modelview matrix set by lookAt and a projection matrix set by ortho
    2. Drawing a mesh with line strips.
  2. hat shows:
    1. Drawing the same points both as a triangle fan and as a line loop.
    2. Setting options to make the lines slightly in front of the triangles (see the sketch at the end of this section).
    3. Doc for depthFunc.
    4. Doc for polygonOffset.
  3. ortho1 and ortho2 show clipping with an interactive orthographic (parallel) projection.
  4. perspective1 and perspective2 show an interactive perspective projection.
  5. shadow shows a shadow.
  6. Notes about the computer programs:
    1. The point is not to teach JavaScript; it's just the vehicle. Before JavaScript, I taught C++. JavaScript is easier.
    2. One point is to teach a widely used API, i.e., WebGL.
    3. Another point is to teach graphics concepts like projection and viewing, and their APIs, like lookAt.
    4. These concepts will exist in any graphics API.
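
The edges-in-front trick from the hat program (item 2 above), sketched; the offset values and numVertices are illustrative assumptions:

    gl.enable(gl.POLYGON_OFFSET_FILL);
    gl.polygonOffset(1.0, 2.0);                        // push filled triangles slightly deeper
    gl.drawArrays(gl.TRIANGLE_FAN, 0, numVertices);    // the faces
    gl.disable(gl.POLYGON_OFFSET_FILL);
    gl.drawArrays(gl.LINE_LOOP, 0, numVertices);       // the edges now win the depth test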