
Week 1

Mon 8/27 (L1)

  1. Discuss syllabus
  2. Intro to OpenGL.
  3. Show a simple OpenGL program, box.cpp, from the code zipfile on the publisher site.
  4. Reading assignment: Guha, chapter 1.
  5. Programs I modified: box2.cpp, box3.cpp

Wed 8/29

  1. Lab to start using OpenGL.

Thu 8/30 (L2)

  1. Correction: TA Shan Zhong's email is zhongs2ATrpiDOTedu. Sorry.
  2. TA office hours. Come near the start of the time. If no one comes, they will leave. Later in the semester, when demand is greater, we may expand these. The TAs will be in the ECSE Flip Flop Lounge, JEC6037, x5397.
    If these times are not convenient, please write to them to schedule a private meeting at a convenient time.
    1. Tuesday 6pm-7pm, Xiaoyang Wang
    2. Friday 4pm-5pm, Shan Zhong
  3. Hike: Dean Trinkle and I are considering a student-prof hike on Monument Mt in Mass. on Sunday Oct 21. Details later. http://www.thetrustees.org/assets/documents/places-to-visit/trailmaps/Monument-Mountain-Trail-Map.pdf
  4. Show
    1. John Cleese PC ad. The broader moral of this story is that computer graphics is not theoretical math, but depends on HW. As HW gets faster, CG gets better.
    2. History of Computer Animation - P1. Note that this is one person's view; there were many other contributors.
  5. Programs:
    1. Play with box.cpp
  6. Program I modified: square2.cpp
  7. Lecture notes: 0830.pdf

Week 2

Linear algebra tutorial

  1. For students confused by the linear algebra questions on homework 1, the following may help:
    1. vector methods.
    2. Vector Math for 3D Computer Graphics
    3. Working with vectors, from Maths: A Student's Survival Guide
  2. Also see wikipedia.
  3. Some older graphics texts have appendices summarizing the relevant linear algebra.
  4. The scalar (dot) product is needed to do things like this:
    1. compute how a ball bounces off a wall in pong (see the sketch after this list)
    2. compute lighting effects (how light reflects from a surface)
  5. The cross product's applications are mostly in engineering, e.g., to compute torque and angular momentum. It also computes the area of a parallelogram.
  6. The triple product (A.BxC) computes the volume of a parallelepiped.
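
A minimal C++ sketch of these products, including the pong-style bounce from the dot-product item (the names here are made up for illustration):

    struct Vec { double x, y, z; };

    double dot(Vec a, Vec b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec    cross(Vec a, Vec b) { return { a.y*b.z - a.z*b.y,
                                          a.z*b.x - a.x*b.z,
                                          a.x*b.y - a.y*b.x }; }
    // Triple product A.(BxC): the signed volume of the parallelepiped on A, B, C.
    double triple(Vec a, Vec b, Vec c) { return dot(a, cross(b, c)); }

    // Pong-style bounce: reflect the incoming direction d about the wall's
    // unit normal n:  r = d - 2 (d.n) n.
    Vec bounce(Vec d, Vec n) {
      double k = 2 * dot(d, n);
      return { d.x - k*n.x, d.y - k*n.y, d.z - k*n.z };
    }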

Thu 9/6 (L3)

  1. Note on Engineering Grounded In Reality and famous graphics alumni.
  2. Executive summary of Portability And Standards.
  3. GLUT - The OpenGL Utility Toolkit interfaces between OpenGL and your windowing system. It adds things, such as menus, mouse and keyboard interface, that were considered too platform-dependent and too far outside OpenGL's core mission to include in OpenGL. GLUT is platform independent, but quite basic. There are several alternatives, some of which are platform dependent but more powerful. You can't always have everything at once. However GLUT is the safe solution.
  4. I've assembled some important OpenGL points here: OpenGL Design tradeoffs and notes
  5. Homework 2 is out; due in one week.
  6. New programs from Guha:
    1. circle.cpp (page 46) shows lineloop and approximating a curve with a polyline.
    2. circularannuluses.cpp (p 48) introduces the depth (or Z) buffer. With it, the nearest object on each pixel is displayed. W/o it, the last object drawn into each pixel is displayed.
    3. helix.cpp (p 51): perspective and ortho projections
    4. hemisphere.cpp (p 59) shows
      1. rotation and translation
      2. perspective projection
      We'll study these in great detail later. Each new transformation concatenates onto the modelview matrix. When you plot a vertex, the current modelview matrix transforms it. The last transformation concatenated onto the modelview matrix is the first applied to the vertex. (There's a short sketch of this at the end of this day's entry.)
    5. squareannulus*.cpp (p 68) show more efficient ways to plot lots of vertices.
  7. Watch part 2 of the history of graphics.
  8. Graphics display hardware, based on Guha page 15. The progress of Computer Graphics is largely the progress of hardware. We'll see more of this later. However, here's an intro.
    1. What physical principles are each type of HW based on?
      1. CRT: phosphors (certain rare earth materials) emit photons when hit by electrons. (Einstein's Nobel was for explaining the related photoelectric effect, light knocking electrons out of a material, not relativity.)
      2. LCD: electric field causes big asymmetric molecules to untwist so that they no longer rotate polarized light passing through them.
    2. What engineering challenges required solving?
      1. Shadow-mask CRT: electron beams travel varying distances at different angles, but don't hit the wrong phosphor even as the system gets hotter. The precision is 0.1%.
      2. Hi-performance graphics requires hi bandwidth memory.
      3. Virtual reality headsets require knowing where your head is and its angle (harder).
    3. What tech advances enabled the solutions?
      1. Raster graphics requires cheap memory.
      2. LCD panels require large arrays of transistors.
  9. Lecture notes: 0906.pdf
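
To illustrate the transformation-order point under hemisphere.cpp above, here is a minimal fixed-pipeline sketch (not from the book); the comments note which transformation reaches the vertex first:

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(10.0, 0.0, 0.0);    // concatenated first, applied to each vertex last
    glRotatef(45.0, 0.0, 0.0, 1.0);  // concatenated last, applied to each vertex first
    glBegin(GL_POINTS);
      glVertex3f(1.0, 0.0, 0.0);     // effectively T * R * v: rotated, then translated
    glEnd();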

Week 3

Lecture and lab swaps

Because I'll be away next week at the Autocarto and GIScience conferences including the International Workshop on Modern Accelerator Technologies for GIScience in Columbus, as a speaker and program committee member:

  1. Wed Sep 12 and Sep 28 will be regular lectures, in Walker 6113.
  2. Mon Sep 17 and Thu Sep 20 will be labs, for you to ask the TAs for help. There will be no scheduled activities.

Mon 9/10 (L4)

  1. Programs that I modify in class are frequently stored here: modified-guha/
  2. New programs from Guha:
    1. squareannulus*.cpp (p 68) show more efficient ways to plot lots of vertices.
  3. Lecture notes: 0910.pdf

Wed 9/12 (L5)

This is a regular lecture.

Note: Given that 'Prediction is very difficult, especially about the future.' - Niels Bohr, lists of topics to be covered in future lectures are tentative. After each class, I will update this wiki to reflect what actually happened.

  1. Rehash squareannulus*.cpp in more detail.
  2. SIGGRAPH 2012 Technical Papers Video Preview
  3. Go through the homework questions.
  4. More OpenGL programs
    1. squareAnnulusAndTriangle, p 69: interspersed vertex and color data
    2. helixList.cpp:
      1. display list
      2. push and pop transformation
    3. multipleLists.cpp: multiple display lists
    4. cube.cpp - a new program to draw a cube with glDrawElements, modified from squareAnnulusAndTriangle.
  5. Most Realistic Computer Graphics - NVIDIA human head demo
  6. Lecture notes: 0912.pdf
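
A minimal sketch of the display-list idea behind helixList.cpp and multipleLists.cpp (cubeList and the cube-drawing commands are placeholders):

    GLuint cubeList;                      // id returned by glGenLists

    void setup() {
      cubeList = glGenLists(1);
      glNewList(cubeList, GL_COMPILE);    // record the commands, don't draw yet
        // ... glDrawElements call (or immediate-mode commands) for the cube ...
      glEndList();
    }

    void drawScene() {
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      glCallList(cubeList);               // replay the recorded commands
      glutSwapBuffers();
    }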

Thu 9/13 (L6)

  1. SIGGRAPH 2011 Technical Papers Video Preview
  2. Programs
    1. fonts.cpp, p 72: bitmapped and stroke fonts
    2. mouse.cpp: mouse button callback
    3. mouseMotion.cpp: mouse motion callback
    4. moveSphere.cpp, p 76: non ASCII key callback
    5. menus.cpp - p 77
    6. nopush.cpp - a new program to test what happens if you pop from the matrix stack w/o pushing. The program produces no output.
    7. extrapush.cpp - a new program to test the effect of more pushes than pops. The stack did not overflow, for the number of extra pushes that I tried.
    8. cube2.cpp - a new program to draw a cube (with glDrawElements) into a display list in the setup routine. Thus, the cube is drawn only once. The drawScene routine just calls that display list. For large scenes this should be much more efficient.
  3. Lecture notes: 0913.pdf
  4. Homework 3 is out; due in one week.

Week 4

Mon 9/17

An open lab, to ask for help.

Wed 9/19

An open lab, to ask for help.

Thu 9/20

  1. An open lab, to ask for help.
  2. Homework 4 online, due Sept 27.

Week 5

Mon 9/24 (L7)

  1. Videos
    1. SIGGRAPH 2011 Computer Animation Festival Video Preview
    2. Evans and Sutherland flight simulator history
  2. Programs
    1. Review menus and cube2.
    2. lineStipple.cpp - p 77 - Set a line stipple.
    3. canvas.cpp - p 78 - Primitive drawing program.
    4. glutObjects.cpp - p 80
      1. 9 builtin glut polyhedra
      2. rendered both shaded and as wire frames,
      3. rotated by user input
      4. initial look at lighting in OpenGL; we'll see this a lot more.
      5. enabling lighting,
      6. enabling specific lights,
      7. material properties,
      8. diffuse, ambient and specular lights.
    5. clippingPlanes.cpp - p 82 (see the sketch after this list)
      1. add more clip planes to the 6 original ones (the sides of the view volume)
      2. enable and disable clipping planes.
      3. enabling specific clipping planes,
      4. clipping plane equation.
    6. viewports.cpp - p 86
      1. change the viewport to a different part of the window, and draw more stuff.
    7. windows.cpp - p 87
      1. multiple top level windows, each with its own callbacks.
  3. Soon we start chapter 4; so feel free to read ahead. We'll see the mathematics of transformations. Each of the major transformations (translation, rotation, scaling, projection) can be expressed as a multiplication with a 4x4 matrix.
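
A rough sketch of the clipping-plane and viewport calls behind clippingPlanes.cpp and viewports.cpp (the particular plane and window fractions are made up):

    // Keep only the half-space y >= 0. A point survives if ax+by+cz+d >= 0;
    // the plane is transformed by the modelview matrix current at this call.
    GLdouble plane[4] = { 0.0, 1.0, 0.0, 0.0 };
    glClipPlane(GL_CLIP_PLANE0, plane);
    glEnable(GL_CLIP_PLANE0);
    // ... draw the geometry to be clipped ...
    glDisable(GL_CLIP_PLANE0);

    // Redirect later drawing to the lower-left quarter of a w x h window.
    glViewport(0, 0, w / 2, h / 2);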

Wed 9/26

Open lab.

Thu 9/27 (L8)

  1. Videos
    1. RealFlow Siggraph 2011 Showreel
    2. Animating Fire with Sound (SIGGRAPH 2011)
    3. Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and HDR Displays
  2. Start Chapter 4, transformations.
  3. Dreamworks coming Dec 5. More later.
  4. I talked a little about broader impact, i.e., why should the taxpayer fund this stuff? My example was research into simulating flames. When you are job hunting, you should ask yourself what benefit your potential employer would get from hiring you. When I write a proposal to a federal agency to get money, I mention how it would benefit from funding me.
  5. Lecture notes: 0927.pdf

Week 6

Mon 10/1 (L9)

  1. Videos
    1. Physics demos Show off Fog at NVIDIA GTC keynote day 1
    2. SIGGRAPH 2012 : Computer Animation Festival Trailer
    3. SIGGRAPH 2011 : Real-Time Live Highlights
  2. Transformation review
    1. Each type of common transformation (translate, rotate, scale, project) is a matrix.
    2. If applying several transformations, it is faster to first multiply the matrices, then just multiply all the points by that one matrix.
    3. Most OpenGl transformation routines modify one of two current transformation matrices: the modelview or the projection.
    4. The modelview matrix moves the world so that the camera is where you want it, relative to the objects. Unless you did a scale, the transformation is rigid - it preserves distances (and therefore also angles).
    5. The projection matrix view-normalizes the world to effect your desired projection and clipping. For a perspective projection, it does not preserve distances or angles, but does preserve straight lines. We'll cover this later.
    6. The last transformation catenated onto the current matrix is the first transformation applied to the object.
    7. OpenGL combines the two matrices, so the modelview matrix is applied first to the object.
  3. Rotations: My note on 3D rotation
    1. all rigid transformations in 3D that don't move the origin have a line of fixed points, i.e., an axis, that they rotate around.
    2. deriving the vector formula for a rotation given the axis and angle
    3. computing the matrix from a rotation axis and angle
    4. testing whether a matrix is a rotation
    5. if it is, then finding the axis and angle
  4. Lecture notes: 1001.pdf
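
A sketch of the axis-and-angle-to-matrix computation (Rodrigues' formula) and the rotation test from item 3; a is a unit vector, and the code is only illustrative:

    #include <cmath>

    // R = cos(t) I + sin(t) [a]x + (1 - cos(t)) a a^T
    void rotationMatrix(const double a[3], double t, double R[3][3]) {
      double c = cos(t), s = sin(t), k = 1 - c;
      R[0][0] = k*a[0]*a[0] + c;      R[0][1] = k*a[0]*a[1] - s*a[2]; R[0][2] = k*a[0]*a[2] + s*a[1];
      R[1][0] = k*a[0]*a[1] + s*a[2]; R[1][1] = k*a[1]*a[1] + c;      R[1][2] = k*a[1]*a[2] - s*a[0];
      R[2][0] = k*a[0]*a[2] - s*a[1]; R[2][1] = k*a[1]*a[2] + s*a[0]; R[2][2] = k*a[2]*a[2] + c;
    }

    // A matrix is a rotation iff its columns are orthonormal (R R^T = I)
    // and det R = +1. The axis is then R's eigenvector with eigenvalue 1,
    // and trace R = 1 + 2 cos(angle).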

Wed 10/3 (L10) Regular lecture

  1. Videos
    1. TED: Paul Debevec animates a photo-real digital face
    2. TED: Ed Ulbrich: How Benjamin Button got his face
  2. Programs
    1. rotatingHelix{1,2,3}.cpp
  3. Lecture notes: 1003.pdf

Thu 10/4 (L11)

  1. Videos
    1. TED: David Bolinsky animates a cell
    2. SIGGRAPH 2010 : Technical Papers Trailer
  2. Programs
    1. speedtest1.cpp, speedtest2.cpp, speedtest3.cpp - programs I wrote to test the speed of using a display list, glDrawElements, and an explicit for loop.
    2. composeTransformations.cpp
    3. box.cpp with blocks 10 and 13 inserted (separately, not both at once)
    4. ballAndTorus.cpp, p 120
    5. ballAndTorusWithFriction.cpp, p 124
    6. clown head, p 125
    7. flowerintPoint.cpp, p 129
  3. Lecture notes: 1004.pdf

Week 7

Tues 10/9 (L12)

  1. Video: SIGGRAPH 2012 : Emerging Technologies
  2. Euler angles and gimbal lock
    1. http://www.youtube.com/watch?v=rrUCBOlJdt4&feature=related Gimble Lock - Explained.
      One problem with Euler angles is that multiple sets of Euler angles can degenerate to the same orientation. Conversely, making a small rotation from certain sets of Euler angles can require a jump in those angles. This is not just a math phenomenon; real gyroscopes experience it.
    2. http://en.wikipedia.org/wiki/Gimbal_lock
    3. What is Gimbal Lock and why does it occur? - an animator's view.
  3. Programs
    1. boxWithLookAt transformation, p 133 - shows gluLookAt
    2. spaceTravel1.cpp, p 153 - changes camera viewpoint as spacecraft moves
    3. animateMan1.cpp etc, p 157 - interactively construct, then replay, an animation sequence with actor's joints' positions changing.
    4. ballAndTorusShadowed.cpp, p 161 - simulate a shadow
    5. selection.cpp, p 163
  4. Picking
    The user selects an object on the display with the mouse. How can we tell which object was selected? This is a little tricky.
    E.g., The selected object may be one instance of a display list that was drawn several times, called from another display list that itself was drawn several times. We want to know, sort of, the whole call stack.
  5. Reading: Guha p 161-169.
  6. The various methods are messy; the only method that you need to know is setting the object id into the color buffer.
  7. Lecture notes: 1009.pdf
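
A rough sketch of item 6, picking by drawing object ids into the color buffer (drawObject and numObjects are hypothetical; turn off anything, such as lighting or dithering, that would alter the colors):

    void pickAt(int mouseX, int mouseY) {
      glDisable(GL_LIGHTING);
      glDisable(GL_DITHER);
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      for (int id = 1; id <= numObjects; id++) {       // encode the id as a color
        glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
        drawObject(id);
      }
      glFlush();

      GLint vp[4];
      glGetIntegerv(GL_VIEWPORT, vp);
      unsigned char pix[3];
      // GLUT's mouse y runs down the window; OpenGL's window y runs up.
      glReadPixels(mouseX, vp[3] - 1 - mouseY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pix);
      int hit = pix[0] | (pix[1] << 8) | (pix[2] << 16); // 0 means background
      // ... act on 'hit', then redraw the scene normally ...
    }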

Wed 10/10 Review for midterm

  1. I'll be here to answer questions and review old exams.
  2. Lecture notes: 1010.pdf

Thurs 10/11 Midterm exam

  1. Open book, open notes
  2. Questions based on the programs and material covered in class, up through lecture 11.
  3. Midterm F2012, Solution.

Week 8

Mon 10/15 (L13)

  1. Quaternions. This is an alternative method to rotate in 3D. Its advantages are:
    1. It starts from the intuitive axis-angle API.
    2. Animating a large rotation in small steps (by varying the angle slowly) is easy. In contrast, stepping the 3 Euler angles does not work well, and there's no obvious way to gradually apply a {$3\times3$} rotation matrix, {$M$}. (You could compute {$M^{1/100}$} and apply it 100 times, but that computation is messy.)
    3. When combining multiple rotations, the axis and angle of the combo is easy to find.
    4. Having only 4 parameters to represent the 3 degrees of freedom of a 3D rotation is the right number. Using only 3 parameters, as Euler angles do, causes gimbal lock. That is, you cannot always represent a smooth rotation by smooth changes of the 3 parameters. OTOH, using 9 parameters, as with a matrix, gives too much opportunity for roundoff errors causing the matrix not to be exactly a rotation. (You can snap the matrix back to a rotation matrix, but that's messy.)
    5. Note on 3D Interpolation.
    6. Guha p 253-268.
  2. Lecture notes: 1015.pdf
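
A small sketch of the quaternion ideas above, illustrative only (for slerp, see the interpolation note):

    #include <cmath>

    struct Quat { double w, x, y, z; };

    // The rotation by 'angle' radians about the unit axis a.
    Quat fromAxisAngle(const double a[3], double angle) {
      double s = sin(angle / 2);
      return { cos(angle / 2), s * a[0], s * a[1], s * a[2] };
    }

    // Composing rotations: applying q and then p is the single rotation p*q,
    // whose axis and angle can be read back out of the product.
    Quat mul(const Quat& p, const Quat& q) {
      return { p.w*q.w - p.x*q.x - p.y*q.y - p.z*q.z,
               p.w*q.x + p.x*q.w + p.y*q.z - p.z*q.y,
               p.w*q.y - p.x*q.z + p.y*q.w + p.z*q.x,
               p.w*q.z + p.x*q.y - p.y*q.x + p.z*q.w };
    }

    // Animating a large rotation in small steps is just
    // fromAxisAngle(a, i * angle / 100) for i = 0, 1, ..., 100.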

Wed 10/17

Lab to get back your graded midterms, and to query the TAs about any grading issues. Do it now; don't wait.

Thu 10/18 (L14)

Announcements:

  1. Project proposal not due until Mon, to give you time to ask questions in class today.
  2. Homework 6 online, due next Thurs.
  3. All grades should be on RPILMS, including an estimated grade for the 1st half of the course, computed at 1/2 for the homeworks (all weighted equally) and 1/2 for the midterm exam.
    Comment: the class is doing excellent work!
  4. Jeff Trinkle and I are planning a faculty-student hike on Monument Mt near Great Barrington for this Sunday. It's relatively easy. Contact him, copy to me, if you're interested.

New material:

  1. View normalization or projection normalization
    1. We want to view the object with our desired perspective projection.
    2. To do this, we transform the object into another object that looks like an amusement park fun house (all the angles and lengths are distorted).
    3. However, the default parallel projection of this normalized object gives exactly the same result as our desired perspective projection of the original object.
    4. Therefore, we can always clip against a 2x2x2 cube, and project thus: (x,y,z)->(x,y,0) etc.
    5. Guha p 643-655.
  2. Homogeneous coordinates
    This is today's big idea
    HomogeneousCoords
  3. Lecture notes: 1018.pdf
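
A tiny sketch of today's big idea (illustrative only): a homogeneous point (x,y,z,w) with w != 0 is the Cartesian point (x/w, y/w, z/w), and a simple perspective projection is just a matrix that puts -z/d into w, after which the divide does the foreshortening.

    struct Hom { double x, y, z, w; };

    // (4,6,2,2) and (2,3,1,1) are the same point: (2,3,1).
    void toCartesian(const Hom& p, double c[3]) {
      c[0] = p.x / p.w;  c[1] = p.y / p.w;  c[2] = p.z / p.w;
    }

    // Perspective onto the plane z = -d, with the eye at the origin looking
    // down -z. Matrix rows: (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,-1/d,0).
    Hom perspective(const Hom& p, double d) {
      return { p.x, p.y, p.z, -p.z / d };
    }
    // After toCartesian the result is (-d x/z, -d y/z, -d): straight lines
    // stay straight, but distances and angles are not preserved.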

Week 9

Mon 10/22 (L15)

  1. Homogeneous coordinates, ctd.
    1. Parallel lines intersect at an infinite point
    2. There is a line through any two distinct points
    3. Projection matrices in more detail.
  2. Lighting

This is the next topic. To prepare, read Guha, Section VI, Chapter 11, pages 403-524. (You don't have to read all of it for Mon.)

Things I added to lecture, which are not in the book:

  1. Tetrachromacy
    Some women have 2 slightly different types of green cones in their retina. They see 4 primary colors.
  2. Metamers
    Different colors (either emitted or reflected) with quite different spectral distributions can appear perceptually identical. With reflected surfaces, this depends on what light is illuminating them. Two surfaces might appear identical under noon sunlight but quite different under incandescent lighting.
  3. CIE chromaticity diagram
    This maps spectral colors into a human perceptual coordinate system. Use it to determine what one color a mixture of colors will appear to be.
  4. Next topic: chapter 10
  5. Lecture notes: 1022.pdf

Wed 10/24

Lab to talk to the TAs.

Thu 10/25 (L16)

  1. Project proposal submission
    1. Please submit all proposals via RPILMS even if you emailed me.
    2. If you are on a team, have one person submit the proposal and the others submit just a statement naming the team leader.
    That makes things uniform and gives me something to attach a grade to.
  2. View normalization or projection normalization
    1. We want to view the object with our desired perspective projection.
    2. To do this, we can transform the object into another object that looks like an amusement park fun house (all the angles and lengths are distorted).
    3. However, the default parallel projection of this normalized object gives exactly the same result as our desired perspective projection of the original object.
    4. Therefore, we can always clip against a 2x2x2 cube, and project thus: (x,y,z)->(x,y,0) etc.
  3. OpenGL modelview vs projection matrices.
    1. The modelview matrix moves the world so that the camera is where you want it, relative to the objects. Unless you did a scale, the transformation is rigid - it preserves distances (and therefore also angles).
    2. The projection matrix view-normalizes the world to effect your desired projection and clipping. For a perspective projection, it does not preserve distances or angles, but does preserve straight lines.
  4. My note on NTSC And Other TV Formats.
  5. Debugging OpenGL: The OpenGL FAQ and Troubleshooting Guide is old but can be useful.
  6. Computer graphics in the real world (enrichment only)
    1. Forma Urbis Romae - reconstruction of a street map of 211 AD Rome from 1186 pieces.
  7. Another OpenGL tutorial
    The Practical Physicist's OpenGL tutorial Edward S. Boyden
  8. Steve Baker's notes on some graphics topics:
    1. GL_MODELVIEW vs GL_PROJECTION
    2. basic OpenGL lighting
    3. Euler angles are evil
    4. Smooth Shading 'Gotcha's in OpenGL
  9. Phong lighting model (sketched in code after this list): The total light at a pixel is the sum of
    1. Incoming ambient light times ambient reflectivity of the material at the pixel,
    2. Incoming diffuse light times diffuse reflectivity times a factor for the light source being low on the horizon,
    3. Incoming specular light times specular reflectivity times a factor for the eye not being aligned to the reflection vector, with an exponent for the material shininess,
    4. Light emitted by the material.
    See page 439.
  10. That is not intended to be completely physical, but to give the programmer lots of parameters to tweak.
  11. OpenGL has several possible levels of shading. Pick one of the following choices. Going down the list makes the shading better but costlier.
    1. Shade the whole polygon to be the color that you specified for one of the vertices.
    2. Bilinearly shade the polygon, triangle by triangle, from the colors you specified for its vertices.
    3. Use the Phong lighting model to compute the color of each vertex from that vertex's normal. Bilinearly interpolate that color over the polygon. That is called Gouraud shading.
    4. Bilinearly interpolate a surface normal at each pixel from normals that you specified at each vertex. Then normalize the length of each interpolated normal vector. Evaluate the Phong lighting model at each pixel from the interpolated normal. That is called Phong shading.
  12. Computing surface normals. See page 442ff.
    1. For a curved surface, the normal vector at a point on the surface is the cross product of two tangent vectors at that point. They must not be parallel to each other.
    2. If it's a parametric surface, partial derivatives are tangent vectors.
    3. A mesh is a common way to approximate a complicated surface.
    4. For a mesh of flat (planar) pieces (facets):
      1. Find the normal to each facet.
      2. Average the normals of the facets around each vertex to get a normal vector at each vertex.
      3. Apply Phong (or Gouraud) shading from those vertex normals.
  13. Homework 7 out, due next Thurs.
  14. Lecture notes: 1025.pdf
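
A sketch of the Phong sum from item 9, for one color channel and one light (illustrative only; N, L, V are unit vectors: surface normal, direction to the light, direction to the eye):

    #include <algorithm>
    #include <cmath>

    struct Vec { double x, y, z; };
    double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    double phong(double Ia, double ka,          // ambient light, ambient reflectivity
                 double Id, double kd,          // diffuse light, diffuse reflectivity
                 double Is, double ks, double shininess,
                 double emitted, Vec N, Vec L, Vec V) {
      double diff = std::max(dot(N, L), 0.0);   // light low on the horizon -> small
      // Reflection of L about N: R = 2 (N.L) N - L.
      Vec R = { 2*diff*N.x - L.x, 2*diff*N.y - L.y, 2*diff*N.z - L.z };
      double spec = (diff > 0) ? std::pow(std::max(dot(R, V), 0.0), shininess) : 0.0;
      return emitted + Ia*ka + Id*kd*diff + Is*ks*spec;
    }

Gouraud shading evaluates this at each vertex and interpolates the resulting color across the polygon; Phong shading interpolates the normal instead and evaluates this at each pixel.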

Week 10

Mon 10/29

No class; RPI closed

Wed 10/31

lab to ask questions of the TAs

Thurs 11/1 (L17)

Possible spring courses

For students who want lots of graphics. These courses won't overlap.

  1. CSCI-4530/6530 ADVANCED COMPUTER GRAPHICS, T F 2:00 3:50PM Cutler
  2. ARTS-4020-01 ADV DIGITAL 3D PROJECTS TF 12-1:50 Lawson

Lighting in OpenGL

  1. http://www.glprogramming.com/red/chapter05.html is a good description.
  2. Also Guha, ...
  3. Programs:
    1. sphereInBox1.cpp
    2. sphereInBox1a.cpp, sphereInBox1b.cpp show that the light position uses the current ModelView matrix.
    3. lightAndMaterial1.cpp
      1. interactively change material properties
      2. new: attenuating distant light
    4. lightAndMaterial2.cpp
      1. interactively change light properties
      2. moving light
    5. spotlight.cpp
      1. new: spotlights, colormaterial mode
    6. litTriangle.cpp - 2 sided lighting
    7. checkeredFloor.cpp - flat shading to draw checkered floor.
  4. Lecture notes: 1101.pdf
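
A minimal fixed-pipeline lighting setup in the spirit of sphereInBox1.cpp (the numeric values are placeholders, not the book's):

    GLfloat lightPos[]  = { 1.0, 2.0, 3.0, 1.0 };   // w = 1: positional light
    GLfloat lightDiff[] = { 1.0, 1.0, 1.0, 1.0 };
    GLfloat matDiff[]   = { 0.8, 0.2, 0.2, 1.0 };
    GLfloat matSpec[]   = { 1.0, 1.0, 1.0, 1.0 };

    glEnable(GL_DEPTH_TEST);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glShadeModel(GL_SMOOTH);             // Gouraud; supply a normal per vertex

    // As sphereInBox1a/1b illustrate, the position is transformed by the
    // modelview matrix that is current at this call.
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
    glLightfv(GL_LIGHT0, GL_DIFFUSE,  lightDiff);

    glMaterialfv(GL_FRONT, GL_DIFFUSE,  matDiff);
    glMaterialfv(GL_FRONT, GL_SPECULAR, matSpec);
    glMaterialf (GL_FRONT, GL_SHININESS, 50.0);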

Week 11

Mon 11/5 (L18)

  1. Xiaoyang Wang talked about Chapter 12, textures.
    1. Motivation for using textures: adding per-pixel surface detail without raising the geometric complexity of the scene.
    2. Texture mapping: mapping an image (or other data) onto a surface.
    3. Commonly used mapping shapes: planar, cylindrical, and spherical mapping.
    4. What can be mapped: material properties, bump maps, light maps.
    5. Functions in OpenGL: glTexImage2D, glTexCoord2f, glTexParameteri
  2. Lecture notes: 1105.pptx
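
A minimal sketch of the three OpenGL calls listed above, using a made-up 2x2 checkerboard image:

    unsigned char image[2*2*3] = { 255,255,255,  0,0,0,          // row 0
                                   0,0,0,        255,255,255 };  // row 1
    GLuint tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);    // rows here are 6 bytes, not a multiple of 4
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2, 2, 0, GL_RGB, GL_UNSIGNED_BYTE, image);

    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);            // glTexCoord2f pins the image's corners to the vertices
      glTexCoord2f(0, 0); glVertex3f(-1, -1, 0);
      glTexCoord2f(1, 0); glVertex3f( 1, -1, 0);
      glTexCoord2f(1, 1); glVertex3f( 1,  1, 0);
      glTexCoord2f(0, 1); glVertex3f(-1,  1, 0);
    glEnd();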

Thurs 11/8 (L19)

  1. Shan Zhong talked about Chapter 13 and the start of 14.
  2. Lecture notes: 1108.pptx

Week 12

Mon 11/12 (L20)

News from the outside world: ivan-sutherland-wins-kyoto-prize. Sketchpad demo.

Chapter 10. Big idea: curves. Big questions:

  1. What math to use?
  2. How should the designer design a curve?
  3. OpenGL implementation.
  4. Reading: bezier.pdf
    Coming up: student presentations are Dec 3, 5 and 6. I had originally planned to use Dec 3, but Dreamworks was coming then; unfortunately, Dreamworks has since canceled, so Dec 3 is available again. We'll have about 1/3 of the presentations on each day, running for up to 2 hours. Each team will get the same time, regardless of team size. I'll bring a signup sheet to class for those who prefer one of those days, up to the capacity of each day. First come, first served.
  5. Partial summary:
    1. To represent curves, use parametric (not explicit or implicit) equations.
    2. Use connected strings or segments of low-degree curves, not one hi-degree curve.
    3. If the adjacent segments match tangents and curvatures at their common joint, then the joint is invisible.
    4. That requires at least cubic equations.
    5. Higher degree equations are rarely used because they have bad properties such as:
      1. less local control,
      2. numerical instability (small changes in coefficients cause large changes in the curve),
      3. roundoff error.
    6. One 2D cartesian parametric cubic curve segment has 8 d.f.
      {$ x(t) = \sum_{i=0}^3 a_i t^i$}, {$ y(t) = \sum_{i=0}^3 b_i t^i$}, for {$0\le t\le1$}.
    7. Requiring the graphic designer to enter those coefficients would be unpopular, so other APIs are common.
    8. Most common is the Bezier formulation, where the segment is specified by 4 control points, which also total 8 d.f.: P0, P1, P2, and P3. (There's a short evaluation sketch after this list.)
    9. The generated curve starts at P0, goes near P1 and P2, and ends at P3.
    10. The curve stays inside the control polygon, the convex hull of the control points. A flatter control polygon means a flatter curve.
    11. A choice not taken would be to have the generated curve also go thru P1 and P2, i.e., thru all the control points. That's called a Catmull-Rom-Oberhauser curve. However that would force the curve to go outside the control polygon by a nonintuitive amount. That is considered undesirable.
    12. Instead of 4 control points, a parametric cubic curve can also be specified by a starting point and tangent, and an ending point and tangent. That also has 8 d.f. It's called a Hermite curve.
    13. The three methods (polynomial, Bezier, Hermite) are easily interconvertible.
    14. Remember that we're using connected strings or segments of cubic curves, and if the adjacent segments match tangents and curvatures at their common joint, then the joint is invisible.
    15. That reduces each successive segment from 8 d.f. down to 2 d.f.
    16. This is called a B-spline.
    17. From a sequence of control points we generate a B-spline curve that is piecewise cubic and goes near, but probably not thru, any control point (except perhaps the ends).
    18. Moving one control point moves the adjacent few spline pieces. That is called local control. Designers like it.
    19. One spline segment can be replaced by two spline segments that, together, exactly draw the same curve. However they, together, have more control points for the graphic designer to move individually. So now the designer can edit smaller pieces of the total spline.
    20. Extending this from 2D to 3D curves is obvious.
    21. Extending to homogeneous coordinates is obvious. Increasing a control point's weight attracts the nearby part of the spline. This is called a rational spline.
    22. Making two control points coincide means that the curvature will not be continuous at the adjacent joint.
      Making three control points coincide means that the tangent will not be continuous at the adjacent joint.
      Making four control points coincide means that the curve will not be continuous at the adjacent joint.
      Doing this is called making the curve (actually the knot sequence) non-uniform. (The knots are the values of the parameter for the joints.)
    23. Putting all this together gives a non-uniform rational B-spline, or a NURBS.
    24. A B-spline surface is a grid of patches, each a bi-cubic parametric polynomial.
    25. Each patch is controlled by a 4x4 grid of control points.
    26. When adjacent patches match tangents and curvatures, the joint edge is invisible.
    27. The surface math is an obvious extension of the curve math.
      1. {$ x(u,v) = \sum_{i=0}^3\sum_{j=0}^3 a_{ij} u^i v^j $}
      2. {$y, z$} are similar.
      3. One patch has 48 d.f., although most of those are used to establish continuity with adjacent patches.
  6. Lecture notes: 1112.pdf
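
A sketch of evaluating one cubic Bezier segment from its 4 control points, in the Bernstein form (2D, purely illustrative):

    struct Pt { double x, y; };

    // p[0..3] are the control points P0..P3; 0 <= t <= 1.
    Pt bezier(const Pt p[4], double t) {
      double u = 1 - t;
      double b0 = u*u*u, b1 = 3*u*u*t, b2 = 3*u*t*t, b3 = t*t*t;   // Bernstein weights
      return { b0*p[0].x + b1*p[1].x + b2*p[2].x + b3*p[3].x,
               b0*p[0].y + b1*p[1].y + b2*p[2].y + b3*p[3].y };
    }

    // bezier(p, 0) == P0 and bezier(p, 1) == P3; because the weights are
    // nonnegative and sum to 1, the curve stays inside the control points'
    // convex hull.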

Thurs 11/15 (L21)

  1. Chapters 10, 15, and 16.
  2. My extra enrichment info on splines: Splines.
  3. Guha's treatment of evaluators etc. is weak. I recommend googling for better descriptions (and see the sketch after this list). This one is good:
    OpenGL Programming Guide - Chapter 12 Evaluators and NURBS
  4. Different books define B-splines slightly differently, especially with the subscripts and end conditions.
  5. Other topics from chapter 10:
    1. swept volumes
    2. regular polyhedron
    3. quadrics
  6. Programs covered:
    1. bezierCurves, p 390
    2. bezierCurveWithEvalMesh, p 391
    3. bezierCurveWithTangent, p 392
    4. bezierSurface, p 392
    5. bezierCanoe, p 394
    6. torpedo, p 395
    7. deCasteljau3, p 565
    8. sweepBezierSurface, p 578
    9. bSplines, p 594
  7. Lecture notes: 1115.pdf
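
A sketch of the OpenGL 1.x evaluator calls that bezierCurves.cpp and its relatives are built on (the control points here are made up):

    static GLfloat ctrl[4][3] = { {-4,-4,0}, {-2,4,0}, {2,-4,0}, {4,4,0} };

    // target, range of t, stride between points, number of points, data.
    glMap1f(GL_MAP1_VERTEX_3, 0.0, 1.0, 3, 4, &ctrl[0][0]);
    glEnable(GL_MAP1_VERTEX_3);

    // Either evaluate points yourself ...
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= 30; i++) glEvalCoord1f(i / 30.0);
    glEnd();

    // ... or let OpenGL generate a uniform grid of evaluation points
    // (cf. bezierCurveWithEvalMesh):
    glMapGrid1f(30, 0.0, 1.0);
    glEvalMesh1(GL_LINE, 0, 30);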

Week 13

Mon 11/19 (L22)

  1. A little more on splines:
    1. cubicSplineCurve2
      1. This shows how to do a NURBS curve in OpenGL. When you drag a control point, only the closest segments of the curve move.
      2. The starting and ending knots are each repeated four times. That makes the spline go through the ending control points.
    2. bicubicSplineSurfaceLitTextured
      1. This shows a NURBS surface.
      2. When you rotate the surface, the light (the small black square) does not rotate. Therefore the highlights change.
      3. Lighting, texture, and the depth test are all enabled.
      4. Surface normals are automatically computed from the surface, and then used by the lighting.
      5. The texture modulates the computed lighting.
      6. Both surface coordinates (vertices) and texture coordinates are NURBS.
    3. trimmedBicubicSplineSurface
      This shows trimming a NURBS surface to cut out holes. The trimming curve can also be a B-spline. It is defined in the parameter space of the surface. If the surface's control points are moved, the trimming curve moves along with the surface.
  2. The OpenGL SuperBible has a lot of code. Each version has two parallel directory trees - for the source and the executables. Don't move files around. You need to compile and run from the executable tree.
  3. The fixed graphics pipeline:
    1. Process vertices, e.g. to apply transformations and compute normals.
    2. Rasterize, i.e., interpolate data from vertices to pixels (fragments)
    3. Process fragments, e.g., to compute Phong shading.
  4. Shaders: customize steps 1 and 3 above (see the sketch after this list).
  5. GPU programming - vertex and fragment shaders etc.
    1. Guha's description in Chapter 20 is excellent.
    2. GLSL/RedSquare
    3. GLSL/MultiColoredSquare2
    4. GLSL/InterpolateTextures
    5. GLSL/BumpMappingPerVertexLighting
    6. GLSL/BumpMappingPerPixelLighting
  6. Homework 9 out, due Thurs Nov 29.
  7. Thanksgiving trivia questions:
    1. What language did Samoset, the first Indian to greet the Pilgrims, use?
      Answer: http://www.holidays.net/thanksgiving/pilgrims.htm
    2. How many European countries had Squanto, the 2nd Indian to greet the Pilgrims, visited?
  8. Lecture notes: 1119.pdf
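
A bare-bones sketch of creating and using a shader pair, in the spirit of the GLSL/RedSquare example (error checking omitted; on most platforms you need GLEW or similar to get these entry points):

    const char* vsrc =
      "void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }";
    const char* fsrc =
      "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }";  // everything red

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsrc, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsrc, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    glUseProgram(prog);   // replaces fixed-pipeline steps 1 and 3 above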

Thurs 11/22

No lecture. Optional reading material:

  1. SIGGRAPH 91 and 92 preliminary programs. This is a bowdlerized version, from whence I have removed items that might give offense.
  2. Raytracing jello brand gelatin

Week 14

Mon 11/26 (L23)

  1. getinfo.cpp shows how to retrieve OpenGL state info, such as version.
  2. More shader examples. These are from the 4th edition of the OpenGL Superbible. The local copy of the code and book is here. (The 5th edition has some negative reviews because of how they updated their code for the latest OpenGL version.)
    The code is arranged in two parallel trees: src has the C++ source code, while projects has the Makefile, executable, and shader files. Some interesting examples are:
    1. chapt16/vertexblend
    2. chapt16/vertexshaders
    3. chapt17/bumpmap
    4. chapt17/fragmentshaders
    5. chapt17/imageproc
    6. chapt17/lighting
    7. chapt17/proctex
    Those chapters of the SuperBible also have good descriptions.
  3. Next week is for student fast forward presentations. (This technique is popular at conferences.)
    1. Each team is to prepare a 5-minute timed MS Powerpoint presentation on your project.
    2. I will run the presentations continuously w/o a break.
    3. Your five minutes will include the time it takes you to get to the front of the class.
    4. The whole presentation must be in one ppt or pptx file.
    5. You may talk while your presentation is running, or remain silent (but then it should have sound).
    6. You must not talk into the next person's presentation.
    7. Mail me (at mail @ w r f r a n k l i n . o r g) your presentation by 9am on the day of your presentation. If your file is over 2MB, then upload it somewhere and send a link. (Also consider using a better codec.)
  4. Regardless of your presentation time, your project is due on Dec 6.
  5. Lecture notes: 1126.pdf

Thurs 11/29 (L24)

  1. 10 minutes for filling out survey
  2. Aliasing and anti-aliasing
    1. The underlying image intensity, as a function of x, is a signal, f(x).
    2. When the objects are small, say when they are far away, f(x) is changing fast.
    3. To display the image, the system evaluates f(x) at each pixel. That is, f(x) is sampled at x=0,1,2,3,...
    4. If f(x), when Fourier transformed, has frequencies higher than 1/2 (cycle per pixel), then that sampling is too coarse to capture the signal. See the Nyquist sampling theorem.
    5. When this hi-freq signal is sampled at too low a frequency, then the result computed for the frame buffer will have visual problems.
    6. It's not just that you won't see the hi frequencies. That's obvious.
    7. Worse, you will see fake low frequency signals that were never in the original scene. They are called aliases of the hi-freq signals.
    8. These artifacts may jump out at you, because of the Mach band effect.
    9. In NTSC, aliasing can even cause rapid intensity changes to show up as fake colors, and vice versa.
    10. Aliasing can occur with time signals, like a movie of a spoked wagon wheel.
    11. This is like a strobe effect.
    12. The solution is to filter out the hi frequencies before sampling, or sample with a convolution filter instead of sampling at a point. That's called anti-aliasing.
    13. OpenGl solutions:
      1. Mipmaps.
      2. Compute scene on a higher-resolution frame buffer and average down.
      3. Consider pixels to be squares not points. Compute the fraction of each pixel covered by each object, like a line. Lines have to have finite width.
    14. Refs:
      1. http://en.wikipedia.org/wiki/Aliasing
      2. http://en.wikipedia.org/wiki/Clear_Type
      3. http://en.wikipedia.org/wiki/Wagon-wheel_effect
      4. http://en.wikipedia.org/wiki/Spatial_anti-aliasing (The H Freeman referenced worked at RPI for 10 years).
      5. http://en.wikipedia.org/wiki/Mipmap
      6. http://en.wikipedia.org/wiki/Jaggies
  3. Chapter 19: ray tracing, radiosity. Other refs:
    1. http://en.wikipedia.org/wiki/Ray_casting
    2. http://en.wikipedia.org/wiki/Ray_tracing_%28graphics%29
    3. http://en.wikipedia.org/wiki/Rendering_%28computer_graphics%29
    4. http://en.wikipedia.org/wiki/Radiosity_%283D_computer_graphics%29
  4. Visibility methods:
    1. Painters:
      1. The painter's algorithm is tricky when faces are close in Z.
      2. Sorting the faces is hard and maybe impossible. Then you must split some faces.
      3. However sometimes some objects are always in front of some other objects. Then you can render the background before the foreground.
    2. Z-buffer:
      1. Subpixel objects randomly appear and disappear (aliasing).
      2. Artifacts occur when objects are closer than their Z-extent across one pixel.
      3. This happens on the edge where two faces meet.
    3. BSP tree:
      1. In 3D, many faces must be split to build the tree.
    4. The scanline algorithm can feed data straight to the video D/A. That was popular decades ago before frame buffers existed. It is popular again when frame buffers are the slowest part of the pipeline.
    5. A real implementation, with a moving foreground and fixed background, might combine techniques.
    6. References: wikipedia.
  5. Lecture notes: 1129.pdf
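
A sketch of the Z-buffer idea from item 4, and of why sub-pixel objects alias (the buffer sizes and Fragment type are invented for this sketch):

    const int W = 640, H = 480;
    float    zbuf[H][W];            // depth of the nearest fragment seen so far
    unsigned fb[H][W];              // its color

    struct Fragment { int x, y; float z; unsigned rgb; };

    void clearBuffers() {
      for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) { zbuf[y][x] = 1e30f; fb[y][x] = 0; }
    }

    // Called for every fragment of every rasterized face, in any order.
    void writeFragment(const Fragment& f) {
      if (f.z < zbuf[f.y][f.x]) {   // closer than what is already there?
        zbuf[f.y][f.x] = f.z;
        fb[f.y][f.x] = f.rgb;
      }
    }

    // An object smaller than a pixel may cover no fragment centers at all,
    // so it flickers in and out as it moves: one of the aliasing artifacts above.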

Week 15

Mon 12/3

Thrun video

There are no student presentations today. (Everyone who was listed either has dropped or is teamed with someone who is presenting later.)

So I will show a video of Sebastian Thrun's keynote talk at this year's NVidia technical conference. (This is a baseline of a good term project, given that Thrun was hampered by being at Stanford not RPI.) (Local cache).

It is also a marvelous example of a successful engineering project. Many different parts all have to work to make the total project succeed. They include laser rangefinding, image recognition, a road database accurate to the specific lane, and GPS navigation. This is also a model for government - university - industry interaction.

DARPA (The Defense Advanced Research Projects Agency) started this concept with a contest paying several million dollars in prizes. (DARPA started connecting computers in different cities with phone lines in 1968. This was the ARPAnet. They funded computer graphics in the 1970s and some early steps in virtual reality in the 1980s.)

In the 1st contest, the best vehicle failed after about 10 miles. Five vehicles completed the 130 mile course in the 2nd contest. The winning project leader was Sebastian Thrun, a Stanford CS prof. He quit and moved to Google, which has now been funding this for several years.

Here is the talk abstract:

What really causes accidents and congestion on our roadways? How close are we to fully autonomous cars? In his keynote address, Stanford Professor and Google Distinguished Engineer, Dr. Sebastian Thrun, will show how his two autonomous vehicles, Stanley (DARPA Grand Challenge winner), and Junior (2nd Place in the DARPA Urban Challenge) demonstrate how close yet how far away we are to fully autonomous cars. Using computer vision combined with lasers, radars, GPS sensors, gyros, accelerometers, and wheel velocity, the vehicle control systems are able to perceive and plan the routes to safely navigate Stanley and Junior through the courses. However, these closed courses are a far cry from everyday driving. Find out what the team will do next to get one step closer to the holy grail of computer vision, and a huge leap forward toward the concept of fully autonomous vehicles.

Finally, Dr Tony Tether, Director of DARPA when this happened, is an RPI BS EE grad.

Term Project Grading

Fast forward presentation
  1. project clearly described: 10
  2. good use of video or graphics: 10
  3. neat and professional: 10

Project itself
  1. Graphics programming with good coding style: 10
  2. Use of interactivity or 3D: 10
  3. A nontrivial amount of it works (and that is shown): 10
  4. Unusually creative: 10

Writeup
  1. describes key design decisions: 10
  2. good examples: 10
  3. neat and professional: 10

Total: 100

Notes:

  1. In addition to the above list, rules violations, such as late submission of the powerpoint file or a project that is seriously off-topic, will have an effect.
  2. A 10-minute demonstration to Shan or Xiaoyang is optional. If you do, they will give me a modifier of up to 10 points either way. I.e., a good demo will help, a bad one hurt.

Wed 12/5 Student term project presentations

  1. Brundige & Matiz
  2. Chen & Felizardo & Keller
  3. Devik
  4. Hanov
  5. Johnsen & Kaplan
  6. McCormick & Whitworth
  7. Shippee & Towns & Zondler
  8. Westrich
  9. Williams

Thurs 12/6 Student term project presentations

  1. Ahier & McKenzie
  2. Alexander & Lsrsen
  3. Benedetti
  4. Brenner & DeBartolomeo
  5. Brundige & Matiz
  6. Chan
  7. de Souza
  8. Devik
  9. Di Pietro
  10. Eastman & Grube
  11. Gruar
  12. Knobloch
  13. Minto & Truhlar
  14. Reome & Wang & Zhu
  15. Stauffer

Note: Submit your term project on LMS. Thanks.

Week 16

Mon-Thurs 12/10-13 TA office hours

Use this time if you wish to demo your project.

  1. Monday: 10:00AM - 12:00 PM Shan
  2. Tuesday: 10:00AM - 12:00 PM Xiaoyang
  3. Wednesday: 10:00AM - 12:00 PM Shan
  4. Thursday: 10:00AM - 12:00 PM Xiaoyang

Location: JEC6037, the flipflop lounge.

Tues 12/11 Review

4-5:30 pm. Walker 6113. WRF.

  1. Review notes: 1211.pdf

Thu 12/13 6:30-9:30 pm

Final exam. Solution.

Course grade

  1. I uploaded things to LMS on 12/16 and will put them in SIS tomorrow.
  2. TOTAL is your total grade, computed as follows:
    1. Homeworks 25%, each scaled to 10, the lowest dropped.
    2. Midterm exam 25%
    3. Term project 25%. Of that, the writeup and optional demo were 70% and the presentation 30%.
    4. Final exam 25%.
    5. 1% added for catching each serious error that I made. Only a few students did this.
  3. LETTER: I rounded TOTAL before computing LETTER. So, an A was 94.5 and up.
  4. RANK: Your rank in the class from 1 to 40.

I hope that you enjoyed learning Computer Graphics. If you'd like to ask questions or talk, I'm available in the future.