CG Class 18, Wed 2018-10-24

2   Why textures are hard to do

  1. ... although the idea is simple.
  2. You want to take each texel on each object, find which pixel it would draw on to, and draw it there.
  3. That's too expensive. Because of the projection, one texel almost never covers exactly one pixel.
    1. Sometimes several texels will project to the same pixel, so they have to be averaged together to get the pixel color.
    2. At other times, one texel will project over several pixels, so you want to blend pixel colors smoothly from one texel to the next texel.
    3. Most texels will be invisible in the final image. Even with today's computers, this wastes a lot of computing power.
  4. So, this is how it's really implemented.
    1. For each visible fragment, i.e., pixel, find which texel would draw there.
    2. This requires a backwards version of the graphics pipeline.
    3. The interesting surfaces are curved: each parameter pair (u,v) maps through, e.g., a bicubic polynomial to a point (x,y,z), and that mapping has to be inverted.
    4. All this has to be computed at hundreds of millions of pixels per second.
    5. It usually utilizes special texture mapping hardware for speed.
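
A concrete (and much simplified) sketch of that backwards mapping, as it appears in a WebGL shader pair. The variable names (vTexCoord, fTexCoord, texMap) are illustrative, not taken from any particular textbook program: the rasterizer interpolates a texture coordinate (s,t) across each triangle, and the fragment shader reads the texel (really a filtered average of nearby texels) for each visible pixel.

  // Vertex shader: pass each vertex's texture coordinate through.
  attribute vec4 vPosition;
  attribute vec2 vTexCoord;
  varying vec2 fTexCoord;
  void main() {
      fTexCoord = vTexCoord;      // interpolated per fragment by the rasterizer
      gl_Position = vPosition;
  }

  // Fragment shader: for each visible pixel, look up the (filtered) texel.
  precision mediump float;
  varying vec2 fTexCoord;
  uniform sampler2D texMap;
  void main() {
      gl_FragColor = texture2D(texMap, fTexCoord);
  }

The texture-mapping hardware does the filtering (averaging several texels, or blending between neighboring texels) inside that one texture2D call.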

CG Class 17, Mon 2018-10-22

1   Chapter 6 programs

  1. Chapter 6

    1. wireSphere: wire frame of recursively generated sphere
    2. shadedCube: rotating cube with modified Phong shading
    3. shadedSphere1: shaded sphere using true normals and per vertex shading
    4. shadedSphere2: shaded sphere using true normals and per fragment shading
    5. shadedSphere3: shaded sphere using vertex normals and per vertex shading
    6. shadedSphere4: shaded sphere using vertex normals and per fragment shading
    7. shadedSphereEyeSpace and shadedSphereObjectSpace show how lighting computations can be carried out in these spaces
  2. Summary of the new part of shadedCube:

    1. var nBuffer = gl.createBuffer();

      Reserve a buffer id.

    2. gl.bindBuffer( gl.ARRAY_BUFFER, nBuffer );

      1. Create that buffer as a buffer of data items, one per vertex.
      2. Make it the current buffer for future buffer operations.
    3. gl.bufferData( gl.ARRAY_BUFFER, flatten(normalsArray), gl.STATIC_DRAW );

      Write an array of normals, flattened to remove metadata, into the current buffer.

    4. var vNormal = gl.getAttribLocation( program, "vNormal" );

      Get the address of the shader (GPU) variable named "vNormal".

    5. gl.vertexAttribPointer( vNormal, 3, gl.FLOAT, false, 0, 0 );

      Declare that the current buffer contains 3 floats per vertex.

    6. gl.enableVertexAttribArray( vNormal );

      Enable the array for use.

    7. (in the shader) attribute vec3 vNormal;

      Declare the variable in the vertex shader that will receive each row of the javascript array as each vertex is processed.

    8. The whole process is repeated with the vertex positions.

      Note that the variable with vertex positions is not hardwired here. You pass in whatever data you want, and your shader program uses it as you want.
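
Putting those steps together, the whole normal-buffer setup is only a few lines of javascript. This is a sketch assuming normalsArray has already been filled with one normal per vertex, the shader declares attribute vec3 vNormal as above, and flatten is the textbook helper that strips the metadata down to a Float32Array:

  var nBuffer = gl.createBuffer();                      // reserve a buffer id
  gl.bindBuffer(gl.ARRAY_BUFFER, nBuffer);              // make it the current array buffer
  gl.bufferData(gl.ARRAY_BUFFER, flatten(normalsArray),
                gl.STATIC_DRAW);                        // copy the normals to the GPU

  var vNormal = gl.getAttribLocation(program, "vNormal");    // location of the shader variable
  gl.vertexAttribPointer(vNormal, 3, gl.FLOAT, false, 0, 0); // 3 floats per vertex, tightly packed
  gl.enableVertexAttribArray(vNormal);                       // enable the attribute array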

2   Available graphics HW

Courtesy of ECSE, I have the following available for this class to play with:

  1. Vive VR fully immersive first-person experience system, in my lab JEC6115.
  2. Ricoh Theta V 360 degree camera, available to borrow.
  3. Older Oculus Rift DK2, available to borrow.
  4. Nvidia GPUs on geoxeon.ecse: GM200 GeForce GTX Titan X, GK110GL Tesla K20Xm.
  5. Nvidia GPU on parallel.ecse: GeForce GTX 1080.

See me if you're interested.

4   Computing surface normals

  1. For a curved surface, the normal vector at a point on the surface is the cross product of two tangent vectors at that point. They must not be parallel to each other.
  2. If it's a parametric surface, partial derivatives are tangent vectors.
  3. A mesh is a common way to approximate a complicated surface.
  4. For a mesh of flat (planar) pieces (facets):
    1. Find the normal to each facet.
    2. Average the normals of the facets around each vertex to get a normal vector at each vertex.
    3. Apply Phong (or Gouraud) shading from those vertex normals.
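
A minimal javascript sketch of that recipe for a triangle mesh. It assumes the mesh is given as an array vertices of vec3s and an array triangles of index triples, and it uses the textbook MV.js helpers vec3, cross, subtract, add, and normalize; a real program would also guard against degenerate facets.

  // One normal accumulator per vertex, initially zero.
  var vertexNormals = vertices.map(function() { return vec3(0, 0, 0); });

  triangles.forEach(function(t) {
      var a = vertices[t[0]], b = vertices[t[1]], c = vertices[t[2]];
      // Facet normal = cross product of two edge (tangent) vectors.
      var n = cross(subtract(b, a), subtract(c, a));
      // Add the facet's normal to each of its three vertices.
      t.forEach(function(i) { vertexNormals[i] = add(vertexNormals[i], n); });
  });

  // Scale each sum to unit length; that is the average direction.
  vertexNormals = vertexNormals.map(function(n) { return normalize(n); });

Summing the unnormalized cross products weights each facet by its area, which is one common (and reasonable) way to average.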

5   Textures

Today's big new idea.

  1. Textures started as a way to paint images onto polygons to simulate surface details. They add per-pixel surface details without raising the geometric complexity of a scene.
  2. That morphed into a general array data format with fast I/O.
  3. If you read a texture with indices that are fractions, the hardware interpolates a value, using one of several algorithms. This is called sampling. E.g., reading T[1.1,2] returns something like .9*T[1,2]+.1*T[2,2] (see the sketch after this list).
  4. Textures involve many coordinate systems:
    1. (x,y,z,w) - world.
    2. (u,v) - parameters on one polygon
    3. (s,t) - location in a texture.
  5. Aliasing is also important.
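
A tiny javascript sketch of what the hardware does for the fractional-index example in item 3 (1D linear filtering along s; real texture units do the same thing bilinearly in s and t, and also between mipmap levels). T is a hypothetical array standing in for one row of texels:

  // Linearly interpolated read of T at fractional index s.
  function sampleLinear(T, s) {
      var i = Math.floor(s);                   // left texel index
      var f = s - i;                           // fractional part
      return (1 - f) * T[i] + f * T[i + 1];    // s = 1.1 gives 0.9*T[1] + 0.1*T[2]
  }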

6   Chapter 9 slides

  1. 9_1 Buffers.

    Ignore anything marked old or deprecated.

    Not a lot of content in this file.

  2. 9_2 Bitblt.

CG Class 16, Thurs 2018-10-18

1   Chapter 7 slides ctd

  1. 7_3 Meshes.

    Big ideas:

    1. Meshes are common for representing complicated shapes.
    2. We can plot a 2 1/2 D mesh back to front w/o needing a depth buffer.
    3. Unstructured point clouds with millions of points from laser scanners are common. We can turn them into a mesh by using various heuristics about the expected object.
    4. Drawing the mesh edges a little closer than the faces is a hack that probably works most of the time if the scene isn't too complex. I.e., go ahead and use it, but don't be surprised if it occasionally fails.
  2. 7_4 Shadows.

    Big idea:

    1. If there is one light source, we can compute visibility with it as the viewer.
    2. Somehow mark the visible (i.e., lit) and hidden (i.e., shadowed) portions of the objects.
    3. Then recompute visibility from the real viewer and shade the visible objects depending on whether they are lit or shadowed.
    4. This works for a small number of lights.
    5. This capability is in OpenGL but not in WebGL, so this topic is enrichment only (= will not be examined).
  3. 7_5 Lighting and shading I.

Big big topic.

2   Lighting etc

  1. This Woman sees 100 times more colors than the average person.
  2. Phong lighting model (written as a formula after this list): the total light at a pixel is the sum of
    1. Incoming ambient light times ambient reflectivity of the material at the pixel,
    2. Incoming diffuse light times diffuse reflectivity times a factor for the light source being low on the horizon,
    3. Incoming specular light times specular reflectivity times a factor for the eye not being aligned to the reflection vector, with an exponent for the material shininess,
    4. Light emitted by the material.
    5. That is not intended to be completely physical, but to give the programmer lots of parameters to tweak.
  3. In OpenGL you can do several possible levels of shading. Pick one of the following choices. Going down the list makes the shading better but costlier.
    1. Shade the whole polygon to be the color that you specified for one of the vertices.
    2. Bilinearly shade the polygon, triangle by triangle, from the colors you specified for its vertices.
    3. Use the Phong lighting model to compute the color of each vertex from that vertex's normal. Bilinearly interpolate that color over the polygon. That is called Gouraud shading.
    4. Bilinearly interpolate a surface normal at each pixel from normals that you specified at each vertex. Then normalize the length of each interpolated normal vector. Evaluate the Phong lighting model at each pixel from the interpolated normal. That is called Phong shading.
  4. Maureen Stone: Representing Colors as 3 Numbers (enrichment)
  5. Why do primary schools teach that the primary colors are Red Blue Yellow?
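
Written out as a formula (one common form; the textbook's notation differs slightly), the Phong model gives the intensity at a point with unit normal n, unit vector l toward the light, unit reflection vector r, and unit vector v toward the eye as

\(I = k_a L_a + k_d L_d \max(\mathbf{l}\cdot\mathbf{n},0) + k_s L_s \max(\mathbf{r}\cdot\mathbf{v},0)^{\alpha} + I_e\)

where the k's are the material's ambient, diffuse, and specular reflectivities, the L's are the corresponding incoming light intensities, \(\alpha\) is the shininess exponent, and \(I_e\) is the emitted term. The whole expression is evaluated separately for red, green, and blue.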

3   Possible spring courses

  1. ECSE-4740 Applied Parallel Computing for Engineers. Its instructor is excellent. There is also a 6000-level version.
  2. An independent Computer Graphics reading course directed by me.

4   RPI's long history in Computer Graphics

  1. Curtis Priem.
  2. Mike Wozny, Head of ECSE, was one of the founders of IEEE Computer Graphics and Applications, which printed that article by Maureen Stone.
  3. William Lorensen, co-inventor of marching cubes, earned his BS and MS here at RPI.

CG Class 15, Wed 2018-10-17

1   Angel programs - Chapter 5 ctd

Chapter 5

  1. hata shows:
    1. Using a modelview matrix set by lookat and a projection matrix set by ortho
    2. Drawing a mesh with line strips.
  2. hat shows:
    1. Drawing the same points both as a triangle fan and as a line loop.
    2. Setting options to make the lines slightly in front of the triangles (see the sketch after this list).
    3. Doc for depthfunc.
    4. Doc for polygonoffset.
  3. ortho1 and ortho2 show clipping with an interactive orthographic (parallel) projection.
  4. perspective1 and 2 show an interactive perspective projection.
  5. shadow shows a shadow.
  6. Notes about the computer programs:
    1. The point is not to teach javascript; it's just the vehicle. Before javascript, I taught C++. Javascript is easier.
    2. One point is to teach a widely used API, i.e., WebGL.
    3. Another point is to teach graphics concepts like projection and viewing, and their APIs, like lookAt.
    4. These concepts will exist in any graphics API.
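
A sketch of the trick in item 2.2, roughly what hat does (the argument values are illustrative): push the filled triangles slightly farther back in depth, so the outline drawn afterward at the same geometric depth wins the depth test.

  gl.enable(gl.DEPTH_TEST);
  gl.depthFunc(gl.LEQUAL);                          // let equal depths pass too

  gl.enable(gl.POLYGON_OFFSET_FILL);                // offset applies only to filled triangles
  gl.polygonOffset(1.0, 2.0);                       // push the fill back a little
  gl.drawArrays(gl.TRIANGLE_FAN, 0, numVertices);   // the filled polygon

  gl.disable(gl.POLYGON_OFFSET_FILL);
  gl.drawArrays(gl.LINE_LOOP, 0, numVertices);      // the outline, now slightly in front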

3   Term project proposals

  1. See the syllabus.
  2. I moved the due dates back.
  3. The proposal will be homework 6.
  4. Submit on LMS.

4   Research paper for ECSE-6964

  1. If you are registered for the grad version of this class, then in addition to the term project, you must also write a research paper; see the syllabus.
  2. Simultaneously with the term project progress reports, please submit research paper progress reports.

5   IOCCC programs

local copy of some of them.

Fun examples of what you can do in a few lines of code.

6   Comments on recent powerpoint slides

  1. I didn't show WebGL Transformations Angel_UNM_14_5_4.ppt in detail since it is mostly obsolete.
    1. The old OpenGL modelview and projection matrix idea is now deprecated, but is interesting for its subdivision of transformations into two functions.
    2. The modelview matrix moves the world so that the camera is where you want it, relative to the objects. Unless you did a scale, the transformation is rigid - it preserves distances (and therefore also angles).
    3. The projection matrix view-normalizes the world to effect your desired projection and clipping. For a perspective projection, it does not preserve distances or angles, but does preserve straight lines.
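
A minimal javascript sketch of that two-matrix split, using the textbook's MV.js helpers (the eye position, frustum numbers, and uniform names are made up): lookAt builds the rigid modelview matrix, and ortho or perspective builds the projection matrix that view-normalizes the scene.

  var eye = vec3(2.0, 2.0, 2.0), at = vec3(0.0, 0.0, 0.0), up = vec3(0.0, 1.0, 0.0);

  var modelViewMatrix  = lookAt(eye, at, up);                     // rigid: move the world so the camera is at the origin
  var projectionMatrix = ortho(-1.0, 1.0, -1.0, 1.0, 0.1, 10.0);  // or perspective(45.0, 1.0, 0.1, 10.0)

  // The vertex shader then computes
  //   gl_Position = projectionMatrix * modelViewMatrix * vPosition;
  gl.uniformMatrix4fv(gl.getUniformLocation(program, "modelViewMatrix"),
                      false, flatten(modelViewMatrix));
  gl.uniformMatrix4fv(gl.getUniformLocation(program, "projectionMatrix"),
                      false, flatten(projectionMatrix));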

7   More Chapter 6 slides

  1. 6_4 Computer Viewing: Positioning the Camera.

  2. 6_5 Computer Viewing: Projection.

    Big idea: view normalization. We'll see this more in chapter 7.

8   Chapter 7 slides

  1. 7_1 Orthogonal projection matrices.

    Big idea: Given any orthogonal projection and clip volume, we transform the object so that we can view the new object with projection (x,y,z) -> (x,y,0) and clip volume (-1,-1,-1) to (1,1,1) and get the same image. That's a normalization transformation (see the matrix after this list). See slide 14.

  2. 7_2 Perspective projection matrices and the normalization transformation.

    Big idea: We can do the same with perspective projections. The objects are distorted like in a fun house. See slide 8.
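
For the orthographic case the normalization is just a translation of the view volume's center to the origin followed by a scale. With the view volume given by ortho(left, right, bottom, top, near, far), using the usual OpenGL convention that near and far are distances along -z, the matrix is

\(N = \begin{pmatrix} \frac{2}{r-l} & 0 & 0 & -\frac{r+l}{r-l} \\ 0 & \frac{2}{t-b} & 0 & -\frac{t+b}{t-b} \\ 0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\ 0 & 0 & 0 & 1 \end{pmatrix}\)

which maps that box onto the standard cube from (-1,-1,-1) to (1,1,1).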

CG Homework 7, due Mon 2018-11-05 2359

  1. (4 pts) Use the homogeneous matrix to project homogeneous points onto the plane x+3y+4z=5, with COP at the origin. What does the point (1,2,4,5) project to? Give the answer as a Cartesian point.

  2. (4 pts) Repeat the previous question with the COP changed to (1,1,1,1).

  3. (6 pts) Do exercise 5.6 on page 272 of the text.

  4. (6 pts) This question will take some thinking.

    Imagine that you have an infinitely large room illuminated by one infinitely long row of point lights. This figure shows a side view of the room.

    The lights are h above the floor and are 1 meter from each other. Assume that the ceiling above the lights is black and that no light reflects off of anything.

    An object at distance d from a light gets illuminated with a brightness \(\frac{1}{d^2}\).

    Each point on the floor is illuminated by all the lights, but more brightly by the closer lights.

    A point p directly below a light will be a little brighter than a point q halfway between two such points. That is the problem --- we want the floor (at least the part directly below the line of lights) to be evenly lit, to within the tolerance given below.

    However, the higher the line of lights, the more evenly the floor will be lit.

    Your problem is to tell us the minimum value of h such that the strip of floor directly below the line of lights is evenly lit to within 10%.

    (Figure: hw-lights.png, the side view of the room referred to above.)

    E.g., the brightness at p is

    \(\sum_{i=-\infty}^{\infty} \;\; \frac{1}{\left(h^2+i^2\right)}\)

(Total: 20 points.)

CG Class 14, Mon 2018-10-15

1   Midterm exam

  1. aka Class 13.
  2. The exam and answers are now online. Sorry I didn't get them online before Thurs.
  3. Lingyu writes: highest score: 40/40; lowest score: 20; average score: 32; median score: 33.

2   Quaternions

  1. Finish off 3D rotations with quaternions.
  2. It's easy to find the quaternion rotation for a given axis and angle (see the formulas after this list).
  3. And vice versa: given a unit quaternion, it's easy to recover the axis and angle.
  4. Combining two rotations and finding the axis and angle of the equivalent single rotation is easy.
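
Concretely (this is the standard formulation, not tied to any particular slide), the unit quaternion for a rotation by angle \(\theta\) about a unit axis \(\mathbf{u}\) is

\(q = \left(\cos\tfrac{\theta}{2},\; \sin\tfrac{\theta}{2}\,\mathbf{u}\right)\)

and a point p, written as the pure quaternion (0, p), rotates to \(p' = q\,p\,q^{-1}\). Reading \(\theta\) and \(\mathbf{u}\) back out of a unit quaternion just inverts those two formulas, and composing two rotations is the single quaternion product \(q_2 q_1\).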

3   3D interpolation

  1. My note on 3D Interpolation, which is better than the textbook section on animating with Euler angles.
  2. The problem with stepping the 3 Euler angles together is that the combo rotation might not be smooth.

4   Shadertoy

Shadertoy.com is a very nice use of WebGL. The co-creator of the website has an account named "iq"; searching for that account brings up some renders that really push WebGL to its limits in really pretty ways (he used to work for Pixar and Siggraph). - Seretsi Khabane Lekena '17

You need a machine with reasonable graphics to show these. The TP Yoga that I bring to class is not one of them.

5   Upcoming schedule

There will be normal lectures for the next two Wednesdays.

6   Textbook Slides

Here are links to the next few textbook powerpoint slides. We do the chapter 5 slides really quickly.

  1. 5_4 WebGL transformations

  2. 5_5 Applying transformations

  3. WebGL Transformations Angel_UNM_14_5_4.ppt, shown only briefly since it is mostly obsolete.

    1. The old OpenGL modelview and projection matrix idea is now deprecated, but is interesting for its subdivision of transformations into two functions.
    2. The modelview matrix moves the world so that the camera is where you want it, relative to the objects. Unless you did a scale, the transformation is rigid - it preserves distances (and therefore also angles).
    3. The projection matrix view-normalizes the world to effect your desired projection and clipping. For a perspective projection, it does not preserve distances or angles, but does preserve straight lines.
  4. 6_1 Building Models.

    Big idea: separate the geometry from the topology.

  5. The Rotating Square.

    Big idea: render by elements.

  6. 6_3 Classical Viewing.

    Big ideas: parallel (orthographic) and perspective projections. The fine distinctions between subclasses of projections are IMO obsolete.

7   Angel programs - Chapter 5

Chapter 5

  1. hata shows:
    1. Using a modelview matrix set by lookat and a projection matrix set by ortho
    2. Drawing a mesh with line strips.
  2. hat shows:
    1. Drawing the same points both as a triangle fan and as a line loop.
    2. Setting options to make the lines slightly in front of the triangles.
    3. Doc for depthfunc.
    4. Doc for polygonoffset.
  3. ortho1 and ortho2 show clipping with an interactive orthographic (parallel) projection.
  4. perspective1 and 2 show an interactive perspective projection.
  5. shadow shows a shadow.
  6. Notes about the computer programs:
    1. The point is not to teach javascript; it's just the vehicle. Before javascript, I taught C++. Javascript is easier.
    2. One point is to teach a widely used API, i.e., WebGL.
    3. Another point is to teach graphics concepts like projection and viewing, and their APIs, like lookAt.
    4. These concepts will exist in any graphics API.

8   Homework 5

is now online, due next Mon.

CG Homework 6, due Mon 2018-10-29 2359

  1. (10 pts) Write 200 words describing your proposed project. Give us an idea of what you want to do and how you propose to do it. Perhaps include a block diagram of its components and a Gantt chart of when you propose to finish the various stages.

(Total: 10 points.)

CG Homework 5, due Mon 2018-10-22 2359

  1. (2 pts) What is the angle (in degrees) between these two vectors: (1,2,0), (1,3,2)?

  2. (2 pts) (Reverse engineering rotations) In 2D, if the point (4,2) rotates about the origin to (2,-4), what's the angle?

  3. (2 pts) Give the matrix M that has this property: for all vectors p, \(Mp = \begin{pmatrix}4\\2\\5\end{pmatrix} \times p\).

  4. (2 pts) Give the matrix M that has this property: for all vectors p, \(Mp = \left( \begin{pmatrix}2\\5\\4\end{pmatrix} \cdot p \right) \begin{pmatrix}2\\5\\4\end{pmatrix}\).

  5. (2 pts) Why can the following not possibly be a 3D Cartesian rotation matrix?

    \(\begin{pmatrix} 3& 0 &0\\1 & 0 &0\\0& 0 &1\end{pmatrix}\)

  6. (2 pts) Use any method (not involving soliciting answers on the internet) to rotate the point (6,6,9) by 120 degrees about the axis (2,2,3). Explain your method. (E.g., if you saw the answer in a vision, are your visions generally accurate?)

  7. (2 pts) Can the volume of a small cube change when its vertices are rotated? (yes or no). Why (not)?

  8. (2 pts) What is the "event loop"?

  9. (2 pts) Why does putting all your vertices into an array and telling OpenGL about it make a big graphics program faster?

  10. (2 pts) Since the Z (aka depth) buffer looks so useful, why is it not enabled by default?

  11. (2 pts) What's the quaternion representing a rotation of 180 degrees about the axis (1,0,0)?

  12. (2 pts) Use the quaternion formulation to rotate the point (0,1,0) by 180 degrees about the axis (1,0,0).

  13. (2 pts) Use the vector formulation to rotate the point (0,1,0) by 180 degrees about the axis (1,0,0).

  14. (14 pts) Extend your program that displays the Starship Enterprise as follows:

    1. Do the rotation in the vertex shader instead of in the javascript program.
    2. Make the color of each pixel depend on its z-value.

(Total: 40 points.)

CG Homework 9, due Mon 2018-11-19 2359

  1. (10 pts) Look at Figures 6.37 and 6.38 on pages 310 and 311 of the textbook. Write 50-100 words on why they look different.
  2. (10 pts) Look at Figure 6.39 on page 314 of the textbook. What is it about the spheres that makes the global model look different from the local one?
  3. (10 pts) Consider the sphere \(x^2+y^2+z^2=169\). What is the normal to the sphere at the point (-12,3,-4)? Be sure your normal is normalized.
  4. (10 pts) Consider a block of glass with index of refraction 1.6. Some light is shining straight down on it. This is called normal incidence. How much of the light reflects off the glass and how much transmits into the glass? Hint: Use Fresnel's law.
  5. (10 pts) A small light source that is twice as far away is 1/4 as bright. That is, there is an inverse square fall off for brightness. However, when modeling light in graphics, we usually don't do that. Why?

(Total: 50 points.)