CG Lecture 27, Mon 2016-11-14

  1. ECSE-4740 Applied Parallel Computing for Engineers, my spring parallel computing course, is an intro course that does not require any other parallel computing course as a prerequisite. As stated on the course web site, the prerequisite is ECSE-2660 CANOS or equivalent, plus basic C++. The course will use Linux, which students will be expected to know or learn on their own.

  2. RPI article about a completed project that I was part of:

    A Game-Changing Approach: Using the X-Box Kinect as a Sensor to Conduct Centrifuge Research. Team of Rensselaer Researchers Develop New Visualization Method to Evaluate Erosion Quantity and Pattern, November 8, 2016, by Jessica Otitigbe.

  3. Another RPI article, on my student Viana Gomes de Magalhães: The Winning Algorithm, Oct 17, 2016, by Mary Martialay.

  4. Note on updating web pages:

    1. You may change a web page after it has been displayed. Editing elements and even adding or deleting them is allowed.
    2. One of the pick programs changes the displayed text in a div to show what you picked.
    3. MathJax, the package I use to render LaTeX math in a web page, edits the displayed page to replace the math source with the rendered math.
    4. A standard way to make things appear and disappear is to include at the start all the elements that will be wanted later. Then make them visible or invisible. The tool on my homepage that makes sections expand and collapse does that.
    5. Every time something is changed on the page, the locations of everything on the page have to be recomputed. With a slow web browser, like old versions of IE, you can watch this.
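    A minimal sketch of the visibility trick in point 4 (a plain object stands in for a DOM element here, since this logic normally runs in a browser; the names are illustrative):

```javascript
// Toggle an element between visible and invisible by flipping its
// style.display, as the expand/collapse tool on my homepage does.
// (In a browser you'd pass document.getElementById(...) instead of
// this stand-in object.)
function toggle(elem) {
  elem.style.display = (elem.style.display === 'none') ? 'block' : 'none';
}

const section = { style: { display: 'none' } };  // stand-in for a hidden div
toggle(section);  // section.style.display is now 'block'
toggle(section);  // and back to 'none'
```

    Each real toggle forces the browser to recompute the locations of everything below the element, which is the relayout cost mentioned in point 5.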
  5. 12_2 Hierarchical Modeling 2.

  6. 12_3 Graphical Objects and Scene Graphs.

  7. 12_4 Graphical Objects and Scene Graphs 2.

  8. 12_5 Rendering overview.

    At this point we've learned enough WebGL. The course now switches to learn the fundamental graphics algorithms used in the rasterizer stage of the pipeline.

  9. 13_1 Clipping.

CG Lecture 26, Thu 2016-11-10

  1. 11_3 Agent based models.

  2. Where I was last week: 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL 2016)

    1. I gave the keynote talk at the Third ACM SIGSPATIAL 2016 PhD Symposium.
    2. The team of Salles Magalhaes, Wenli Li, Marcus Andrade (Federal University of Vicosa, Brazil) and me came 2nd in the GISCUP programming contest. Here are our talk and paper.
  3. I've mentioned DARPA several times; they funded research into computer graphics starting in the 1970s. So here's some enrichment material on DARPA and autonomous vehicles. This shows why the US is the best in the world at R&D.

    Autonomous vehicles make a marvelous example of a successful engineering project. Many different parts all have to work to make the total project succeed. They include laser rangefinding, image recognition, a road database accurate to the specific lane, and GPS navigation. This is also a model for government - university - industry interaction.

    DARPA (the Defense Advanced Research Projects Agency) started this concept with a contest paying several million dollars in prizes. (DARPA started connecting computers in different cities with phone lines in 1968. This was the ARPAnet. They funded computer graphics in the 1970s and some early steps in virtual reality in the 1980s.)

    In the 1st contest, the best vehicle failed after about 10 miles. Five vehicles completed the 130-mile course in the 2nd contest. The winning project leader was Sebastian Thrun, a Stanford CS prof. He quit and moved to Google, which has now been funding this work for several years.

    Finally, Dr Tony Tether, Director of DARPA when this happened, is an RPI BS EE grad. (He went to Stanford for his PhD).

  4. More programs from Chapter 7.

    I'm not covering all the gruesome details of these programs, but just hitting highlights.

    1. particleDiffusion: buffer ping ponging of 50 particles initially placed randomly and then moving randomly with their previous positions diffused as a texture.

      It renders to a texture instead of to a color buffer. Then it uses the texture.

      There are better descriptions on the web. Search for 'webgl render texture'. I'll work with the textbook code. Jumping back and forth is confusing, partly because they might use different utility files.

      There are often better descriptions on the web than in the textbook. I've considered running a course only with web material. What do you think?

      https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API has good documentation.
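      The ping-pong idea can be sketched with plain arrays standing in for the two textures the program renders between (the step size and names here are illustrative, not from the textbook code):

```javascript
// Two buffers: read the old positions from src, write the new ones to
// dst, then swap, so each frame diffuses from the previous frame's output.
let src = Array.from({ length: 50 }, () => Math.random());  // random start
let dst = new Array(50);

function step() {
  for (let i = 0; i < src.length; i++) {
    dst[i] = src[i] + (Math.random() - 0.5) * 0.01;  // small random move
  }
  [src, dst] = [dst, src];  // ping-pong: the output becomes next frame's input
}

step();
```

      The GPU version does the same thing with two textures, rendering into one while sampling the other.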

    2. pickcube etc. This prints the name of the face that you pick.

      1. This draws a label for each face into a separate buffer.
      2. Then it indexes into it with the coordinates of the mouse.
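      The indexing step can be sketched like this (a plain array stands in for the offscreen label buffer; the sizes and ids are illustrative):

```javascript
// Each face was drawn into an offscreen buffer using its id as the
// "color"; the mouse coordinates then index straight into that buffer.
const width = 4, height = 4;
const idBuffer = new Uint8Array(width * height);  // one face id per pixel

idBuffer[1 * width + 2] = 3;  // pretend face 3 covers pixel (2, 1)

function pick(x, y) {
  return idBuffer[y * width + x];  // 0 means background, nothing picked
}
```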
  5. Stereo viewing. There are a number of techniques for stereo viewing of movies and videos, dating back to the 19th century.

    1. Make the left image red and the right one blue. Use red-blue glasses.

      This decreases the color experience, but the glasses are cheap.

    2. Polarize the left and right images differently. Movie theatres do this.

      The glasses are cheap. There is full color. The projection process is more complicated (= expensive).

    3. The glasses have shutters that block and clear the left and right eyes in turn. In sync, the TV or projector shows the left or right image.

      The projector is cheaper. The glasses are more expensive.

    4. Use a display with a parallax barrier.

      Only one person can view at a time. No glasses are needed.

    5. Use the fact that your eyes see bright objects faster than they see dim objects. The glasses have a simple grey filter over one eye. Beer companies have used this for commercials.

      No special display is needed. This works only with moving objects.

  6. 12_1 Hierarchical Modeling 1.

CG Lecture 25, Wed 2016-11-09

  1. Tomorrow, Prof Cutler will tell us about her spring course. CSCI-4530-01 Advanced Computer Graphics.

  2. Aliasing and anti-aliasing

    1. The underlying image intensity, as a function of x, is a signal, f(x).
    2. When the objects are small, say when they are far away, f(x) is changing fast.
    3. To display the image, the system evaluates f(x) at each pixel. That is, f(x) is sampled at x=0,1,2,3,...
    4. If f(x), when Fourier transformed, has frequencies higher than 1/2 (cycle per pixel), then that sampling is too coarse to capture the signal. See the Nyquist sampling theorem.
    5. When this hi-freq signal is sampled at too low a frequency, then the result computed for the frame buffer will have visual problems.
    6. It's not just that you won't see the hi frequencies. That's obvious.
    7. Worse, you will see fake low frequency signals that were never in the original scene. They are called '''aliases''' of the hi-freq signals.
    8. These artifacts may jump out at you, because of the Mach band effect.
    9. In NTSC, aliasing can even cause rapid intensity changes to show up as fake colors, and vice versa.
    10. Aliasing can occur with time signals, like a movie of a spoked wagon wheel.
    11. This is like a strobe effect.
    12. The solution is to filter out the hi frequencies before sampling, or sample with a convolution filter instead of sampling at a point. That's called '''anti-aliasing'''.
    13. OpenGL solutions:
      1. Mipmaps.
      2. Compute scene on a higher-resolution frame buffer and average down.
      3. Consider pixels to be squares not points. Compute the fraction of each pixel covered by each object, like a line. Lines have to have finite width.
    14. Refs:
      1. http://en.wikipedia.org/wiki/Aliasing
      2. http://en.wikipedia.org/wiki/Clear_Type
      3. http://en.wikipedia.org/wiki/Wagon-wheel_effect
      4. http://en.wikipedia.org/wiki/Spatial_anti-aliasing (The H Freeman referenced worked at RPI for 10 years).
      5. http://en.wikipedia.org/wiki/Mipmap
      6. http://en.wikipedia.org/wiki/Jaggies
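    A numeric sketch of the aliasing effect (pure arithmetic, no graphics): sampling a cosine of 0.9 cycles/pixel at integer pixels produces exactly the same samples as a 0.1 cycles/pixel cosine, which is the fake low-frequency alias.

```javascript
// Sample cos(2*pi*freq*x) at integer pixel positions x = 0..n-1.
function sample(freq, n) {
  const out = [];
  for (let x = 0; x < n; x++) out.push(Math.cos(2 * Math.PI * freq * x));
  return out;
}

const hi = sample(0.9, 8);  // 0.9 cycles/pixel: above the Nyquist limit of 1/2
const lo = sample(0.1, 8);  // its alias at 0.1 cycles/pixel
// hi[x] and lo[x] agree (up to rounding) at every sample point, so the
// frame buffer cannot distinguish the two signals.
```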
  3. Videos - military applications of graphics

    US Military's Futuristic Augmented Reality Battlefield - Augmented Immersive Team Trainer (AITT)

    Daqri's Smart Helmet Hands On

    HoloLens Review: Microsoft's Version of Augmented Reality

    '''Modeling and simulation''' is a standard term.

  4. 10_5 Rendering the Mandelbrot Set.

    Shows the power of GPU programming.

    The program is in Chapter 10.

  5. 11_1 Framebuffer objects.

  6. 11_2 Render to texture.

  7. 11_3 Agent based models.

  8. If my order here looks a little chaotic, it's because the slides don't exactly align with the programs.

  9. More programs from Chapter 7.

    1. Several of the programs don't display on my laptop, so I won't show them. They include bumpmap and render.

    2. Cubet: enabling and disabling the depth buffer and drawing translucent objects.

    3. particleDiffusion: buffer ping ponging of 50 particles initially placed randomly and then moving randomly with their previous positions diffused as a texture.

      It renders to a texture instead of to a color buffer. Then it uses the texture.

      There are better descriptions on the web. Search for 'webgl render texture'. I'll work with the textbook code. Jumping back and forth is confusing, partly because they might use different utility files.

      There are often better descriptions on the web than in the textbook. I've considered running a course only with web material. What do you think?

      https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API has good documentation.

CG Lecture 24, Mon 2016-11-07

  1. Reminder of the Dreamworks visit:

    Monday, Nov 7
    8pm-9pm
    Sage 3303
    Company Overview
    Nicole Dial, University Relations
    
    Tuesday, Nov 8
    10a-11a
    PDI Studio @ Sage 2211
    Technical Direction at DWA
    Kathryn Skorpil, Lead Technical Director
    
  2. If you're looking for masters projects or independent study courses for the spring and next year, then I might be available. Let's talk.

    I would expect you to do all the work, including teaching yourself, then attend a weekly or semiweekly meeting and write me reports on a blog. You would work yourself as hard as if you were in a regular course run by Prof. xxxxxx.

  3. ECSE-4740 Engineering Parallel Computing is available for students wanting access to state-of-the-art parallel computing facilities. You will learn things useful in your work on other topics.

    My parallel computer (dual 14-core Intel Xeon, Intel Xeon Phi, Nvidia GTX 1080, 256GB memory) is also available for any RPI-related work.

  4. Sikorsky is hiring. Dan Flickinger, bdbabnbibeblb.bmb.bfblbibcbkbibnbgbebrb@blbmbcbob.bcbobmb (remove all the b's), former post-doc of Jeff Trinkle, writes:

    I'm working with the perception team at Sikorsky, part of the autonomy team. We will soon have openings in our group. I currently have a proof-of-concept system together that processes point cloud streams into a polygonal world model, with the intentions of rapidly growing it into a high performance aircraft perception system capable of supporting landing zone selection, obstacle avoidance, and human interface tasks. So we're definitely looking for people interested in meshing, texturing, and object recognition. People interested in image processing, high performance computing, and sensor fusion may also apply.

    https://www.wired.com/2016/11/darpa-alias-autonomous-aircraft-aurora-sikorsky/

    Also, we definitely have internships available for the summer. Talk to David Glowny, a sophomore in the CS department. He did an internship last summer with our group, and helped build a great world model object visualization system for our ground control interface.

  5. Wed is a regular lecture.

  6. Iclicker questions.

  7. 10_1 Reflection and Environment Maps.

  8. 10_2 Bump Maps.

    Perturb the surface normals to emulate small details (bumps).

  9. 10_3 Compositing and Blending.

    Some of this material was removed from the current OpenGL, and some of the remaining stuff hasn't been included in WebGL (yet?). So, you have to learn just the high-level stuff, like what this is and why it's interesting.

    1. Compositing, using partial transparency, creates complex scenes by blending simple components.
      1. It's hard to do accurately in the pipeline.
      2. Drawing objects back-to-front into the color buffer is a good heuristic.
      3. Clamping and accuracy are concerns.
    2. Fog effects provide a depth cue that helps the viewer understand the image.
    3. Anti-aliasing improves the image quality when there are small objects.
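    The back-to-front heuristic uses the standard "over" blend at each pixel; here is a one-channel sketch (colors in [0,1]; the function name is illustrative):

```javascript
// Blend a translucent source fragment over what is already in the color
// buffer: srcAlpha of the new color plus (1 - srcAlpha) of the old.
function over(srcColor, srcAlpha, dstColor) {
  return srcAlpha * srcColor + (1 - srcAlpha) * dstColor;
}

const result = over(1.0, 0.5, 0.0);  // 50% white over black gives 0.5
```

    Drawing the translucent objects back-to-front makes repeated applications of this formula accumulate sensibly; clamping keeps the result in [0,1].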
  10. 10_4 Imaging Applications.

    1. The GPU is so powerful that people figured out ways to use it for image processing tasks it wasn't designed for.
    2. That started the field of GPGPU.
    3. Nvidia watched this happening and added new features to the GPU, like double-precision IEEE-standard floating point computations, to make this easier.
    4. The top line of Nvidia GPUs is designed for scientific computation and doesn't even have video outputs.
    5. The 2nd fastest known supercomputer has 15000 Nvidia GPUs.
  11. 10_5 Rendering the Mandelbrot Set.

    Shows the power of GPU programming.

  12. Videos - commercial applications of graphics

    1. Hydraulic Fracture Animation
    2. Deepwater Horizon Blowout Animation www.deepdowndesign.com
    3. On-board fire-fighting training simulator made for the Dutch navy
    4. Ship accident SMS

CG Lecture 22, Wed 2016-10-26

  1. 3D Object Manipulation in a Single Photograph using Stock 3D Models (Siggraph 2014).

    From here: Photo-editing software restricts the control of objects in a photograph to the 2D image plane. We present a method that enables users to perform the full range of 3D manipulations, including scaling, rotation, translation, and nonrigid deformations, to an object in a photograph. As 3D manipulations often reveal parts of the object that are hidden in the original photograph, our approach uses publicly available 3D models to guide the completion of the geometry and appearance of the revealed areas of the object. The completion process leverages the structure and symmetry in the stock 3D model to factor out the effects of illumination, and to complete the appearance of the object. We demonstrate our system by producing object manipulations that would be impossible in traditional 2D photo-editing programs, such as turning a car over, making a paper crane flap its wings, or manipulating airplanes in a historical photograph to change its story.

  2. Iclicker questions.

  3. Today's big new idea: Textures.

    1. Textures started as a way to paint images onto polygons to simulate surface details. They add per-pixel surface details without raising the geometric complexity of a scene.
    2. That morphed into a general array data format with fast I/O.
    3. If you read a texture with indices that are fractions, the hardware interpolates a value, using one of several algorithms. This is called '''sampling'''. E.g., reading T[1.1,2] returns something like .9*T[1,2]+.1*T[2,2].
    4. Textures involve many coordinate systems:
      1. (x,y,z,w) - world.
      2. (u,v) - parameters on one polygon
      3. (s,t) - location in a texture.
    5. Aliasing is also important.
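    The sampling rule in point 3 above, spelled out in one dimension (a sketch; real texture units also offer nearest-neighbor and mipmapped variants):

```javascript
// Read a 1D "texture" T at a fractional index s by linearly
// interpolating the two neighboring texels.
function sample1D(T, s) {
  const i = Math.floor(s), f = s - i;  // integer part and fraction
  return (1 - f) * T[i] + f * T[i + 1];
}

const T = [0, 10, 20, 30];
sample1D(T, 1.1);  // 0.9*T[1] + 0.1*T[2], about 11
```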
  4. 9_1 Buffers.

    Ignore anything marked old or deprecated.

    Not a lot of content in this file.

  5. 9_2 Bitblt.

  6. 9_3 Texture mapping.

    Start of a big topic.

  7. 9_4 Texture mapping.

  8. 9_5 WebGL Texture mapping I.

  9. 9_6 WebGL Texture mapping II.

  10. Texture mapping programs.

CG Lecture 21, Mon 2016-10-24

  1. This Wed will be a regular lecture.

  2. There will be no classes next week.

  3. Spring course to consider: ECSE-4740 Applied Parallel Computing for Engineers. Its instructor is excellent.

  4. RPI has a long history in Computer Graphics. E.g. Prof Mike Wozny, Head of ECSE, was one of the founders of IEEE Computer Graphics and Applications, which printed the article by Maureen Stone that I presented last time.

  5. Iclicker questions.

  6. 8_2 Lighting in WebGL.

  7. shadedSphere1, 2 etc programs.

  8. 8_3 Polygonal Shading.

  9. 8_4 Per Vertex and Per Fragment Shading.

  10. 8_E3 Marching Squares.

    Will be omitted.

  11. Computing surface normals.

    1. For a curved surface, the normal vector at a point on the surface is the cross product of two tangent vectors at that point. They must not be parallel to each other.
    2. If it's a parametric surface, partial derivatives are tangent vectors.
    3. A mesh is a common way to approximate a complicated surface.
    4. For a mesh of flat (planar) pieces (facets):
      1. Find the normal to each facet.
      2. Average the normals of the facets around each vertex to get a normal vector at each vertex.
      3. Apply Phong (or Gouraud) shading from those vertex normals.
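    The recipe above, sketched numerically (vectors are plain arrays; the facet is illustrative):

```javascript
// Facet normal: cross product of two non-parallel tangent vectors.
function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return v.map(c => c / len);
}

// A facet in the z=0 plane has tangents along x and y, so its normal is +z.
const facetNormal = normalize(cross([1, 0, 0], [0, 1, 0]));  // [0, 0, 1]

// Vertex normal: normalized average of the surrounding facet normals.
function vertexNormal(facetNormals) {
  const sum = facetNormals.reduce(
    (acc, n) => acc.map((c, i) => c + n[i]), [0, 0, 0]);
  return normalize(sum);
}
```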

CG iclicker questions

Thu 2016-09-08

  1. What is your major?
  1. CSYS
  2. ELEC
  3. CSCI
  4. GSAS
  5. Other
  1. What is your class?
  1. 2017
  2. 2018
  3. 2019
  4. Grad
  5. Other
  1. What is the correct order of the graphics pipeline?
  1. vertex-shader fragment-shader rasterizer primitive-assembly
  2. fragment-shader rasterizer primitive-assembly vertex-shader
  3. rasterizer primitive-assembly vertex-shader fragment-shader
  4. primitive-assembly vertex-shader fragment-shader rasterizer
  5. vertex-shader primitive-assembly rasterizer fragment-shader
  1. If you wanted to change all the vertex positions by multiplying each x-coordinate by the same vertex's y-coordinate, the best place to do it is:
  1. with a javascript function
  2. in the vertex shader
  3. in the fragment shader
  4. in the html file
  5. it can't be done.

Mon 2016-09-12

  1. What is the correct order of the graphics pipeline?
  1. vertex-shader fragment-shader rasterizer primitive-assembly
  2. fragment-shader rasterizer primitive-assembly vertex-shader
  3. rasterizer primitive-assembly vertex-shader fragment-shader
  4. primitive-assembly vertex-shader fragment-shader rasterizer
  5. vertex-shader primitive-assembly rasterizer fragment-shader
  1. Why does OpenGL have the triangle-strip object type, in addition to the triangle type?
  1. It leads to smaller, faster, graphics objects.
  2. It leads to bigger, faster, graphics objects.
  3. It's not possible to split some complicated polygons into lots of simple triangles, but you can split them into triangle-strips.
  4. The standards writers were being paid by the word.
  1. What is the physical principle underlying LCD?
  1. Fire an energetic electron at a rare earth atom and a photon of light is emitted.
  2. Plowing your family farm as a kid can suggest a way to invent electronic television.
  3. A solution of corkscrew shaped molecules can rotate polarized light.
  4. Putting your finger close to a capacitor can change its capacitance.
  5. If two coils of wire are close, then an alternating current in one can induce a current in the other.
  1. Standards
  1. allow programmers to move between projects
  2. allow different types of hardware to be substituted in.
  3. ..... operating systems ......
  4. allow vendors to lock in customers.
  5. can prevent the latest HW from being used to its fullest.

Wed 2016-09-14

  1. What is your favorite platform/OS?
  1. Windows
  2. Linux
  3. Mac
  4. Android
  5. Other
  1. Color printing on a sheet of paper exemplifies
  1. additive color
  2. subtractive color
  3. multiplicative color
  4. divisive color
  5. exponential color
  1. Red is a primary color of which color space?
  1. additive color
  2. subtractive color
  3. multiplicative color
  4. divisive color
  5. exponential color
  1. Major components of the OpenGl model as discussed in class are:
  1. Objects, viewer, light sources, planets, material attributes.
  2. Still cameras, video cameras, objects, light sources.
  3. Objects, viewer, light sources, material attributes.
  4. Colored objects, black and white objects, white lights, colored lights
  5. Flat objects, curved objects, near lights, distant lights.
  1. How do you draw a pentagon in WebGL?
  1. Split it into triangles.
  2. Split it into triangles if it is concave, otherwise draw it directly.
  3. Split it into triangles if it is convex, otherwise draw it directly.
  4. Draw it directly.
  5. Split it into hexagons.

  1. If you want your javascript program to send a color for each vertex to the vertex shader, what type of variable would the color be?
  1. uniform
  2. varying
  3. attribute
  4. dynamic
  5. static

Mon 2016-09-19

  1. When using the vector rule to rotate a point p about an axis by an angle:
  1. Neither the point nor the axis need to be of any particular length.
  2. The point must be at a distance one from the origin.
  3. The axis must be of unit length.
  4. Both B and C.
  5. The point's distance from the origin must equal the axis's length.
  1. In the OpenGL pipeline, the Primitive Assembler does what?
  1. fits together pieces of ancient Sumerian pottery.
  2. rotates vertices as their coordinate systems change.
  3. creates lines and polygons from vertices.
  4. finds the pixels for each polygon.
  5. reports whether the keyboard and mouse are plugged in correctly.
  1. To draw into only part of the graphics window you would call:
  1. gl.drawpart
  2. gl_Position
  3. gl.viewport
  4. gl.window
  5. There's no builtin routine; you have to scale your graphics yourself to achieve this.
  1. If you do not tell OpenGL to do hidden surface removal, and two objects overlap the same pixel, then what color is that pixel?
  1. OpenGL throws an error.
  2. the closer object
  3. the farther object
  4. the first object to be drawn there
  5. the last object to be drawn there
  1. When rotating an object, what can happen to an object?
  1. Straight lines might turn into curves.
  2. Straight lines stay straight, but angles might change.
  3. Straight lines stay straight, and angles don't change, but distances may change, either longer or shorter.
  4. Straight lines stay straight, and angles don't change, but distances might get longer.
  5. Straight lines stay straight, and angles and distances don't change.

Thurs 2016-09-22

  1. Multiplying a complex number x+iy by e^(i pi/4) is equivalent to doing what to the point (x,y):
  1. Rotating it by 90 degrees.
  2. Rotating it by 90 radians.
  3. Rotating it by 45 degrees.
  4. Translating it by 90 degrees.
  5. Nothing; this doesn't change the point.
  1. If ''i'' and ''j'' are quaternions, what is ''i+j''?
  1. ''-k''.
  2. ''0''.
  3. ''1''.
  4. ''i+j'', there is no simpler representation.
  5. ''k''.
  1. If ''i'' and ''j'' are quaternions, what is ''ij''?
  1. ''-k''.
  2. ''0''.
  3. ''1''.
  4. ''i+j'', there is no simpler representation.
  5. ''k''.
  1. The quaternion ''i'' represents what rotation?
  1. 180 degrees about the x-axis.
  2. 90 degrees about the x-axis.
  3. 180 degrees about the y-axis.
  4. 90 degrees about the y-axis.
  5. no change, i.e., 0 degrees about anything.

Mon 2016-09-26

  1. Which rotation methodology is best when working with nested gimbals?
  1. vectors
  2. quaternions
  3. matrices
  4. Euler angles
  5. None are particularly good.
  1. Which make it easy to combine two rotations into one?
  1. vectors
  2. quaternions
  3. matrices
  4. Euler angles
  5. B and C
  1. Why would you want to send a variable to a vertex shader that has the same value for every vertex?
  1. the vertex's coordinates
  2. the vertex's color
  3. the object's global orientation
  4. the location of the global light source
  5. C and D.
  1. How would you send that variable to the vertex shader?
  1. as a varying variable.
  2. as a uniform variable.
  3. in another array similar to the vertex array.
  4. with a bindBuffer call.
  5. with a bufferData call.

Thurs 2016-09-28

  1. A mouse reports which type of position?
  1. Absolute
  2. Relative
  3. Either, depending on which button you press.
  1. A tablet reports which type of position?
  1. Absolute
  2. Relative
  1. Which of these can easily be a logical keyboard?
  1. a physical keyboard
  2. a tablet with handwriting recognition
  3. a virtual keyboard on the screen, selecting letters with the mouse
  4. voice input and recognition
  5. all of the above

  1. Which of these modes scales up better when there are many input devices, most of which are quiet most of the time?
  1. event
  2. request

Wed 2016-10-19

  1. If you use a separate buffer that's 16 bits deep to store object ids for picking, then how many different object ids can you have? Assume that the frame buffer is 1024x1024.
    1. 1,048,576.
    2. 16x1024x1024.
  2. The default OpenGL camera is:
    1. At the origin and looking up the Y axis.
    2. At (1,0,0) and looking towards the origin.
    3. At \(X=\infty\) and looking towards the origin.
    4. At the origin and looking down the Z axis.
    5. At the origin and looking up the Z axis.
  3. (Sometimes I write vectors horizontally since they're easier to type.) The 3D homogeneous point (1,2,3,4) is equivalent to which Cartesian point?
    1. (1,2,3)
    2. (1,2,3,1)
    3. (1,2,3,4)
    4. (1/4, 2/4, 3/4)
    5. (1/4, 2/4, 3/4, 4/4)
  4. Translating the 2D homogeneous point (1,2,3) by (in Cartesian terms) dx=1, dy=2 gives which new homogeneous point?
    1. (1,2,3)
    2. (1,2,3,4)
    3. (2,4)
    4. (2,4,3)
    5. (4,8,3)
  5. This is a homogeneous 3D translation matrix: \(\begin{pmatrix} 2&0&0&2\\ 0&2&0&3\\0&0&2&4\\ 0&0&0&2 \end{pmatrix}\) Where is the Cartesian point (0,0,0) translated to?
    1. (0,0,0)
    2. (1,3/2,2)
    3. (1,3/2,2,1)
    4. (2,3,4)
    5. (2,3,4,2)

Thu 2016-10-20

  1. Which matrix affects your desired camera position?
    1. modelview
    2. projection
  2. Which matrix affects your desired clipping?
    1. modelview
    2. projection
  3. The default OpenGL clipping cube has size:
    1. 2
    2. 10
    3. 1
    4. 0
    5. 1024

Mon 2016-10-24

  1. In the Phong lighting model, the shininess parameter applies to which type of lighting?

    1. ambient
    2. diffuse
    3. specular
    4. flat
    5. Lambertian
  2. In which type of lighting does the position of the viewer affect the light?

    1. ambient
    2. diffuse
    3. specular
    4. flat
    5. Lambertian
  3. What law determines the angle of the reflected light?

    1. Fresnel
    2. Parkinson
    3. Newton
    4. Snell
    5. Gresham
  4. What law determines what fraction of light enters a transparent material, and what fraction is reflected?

    1. Fresnel
    2. Parkinson
    3. Newton
    4. Snell
    5. Gresham
  5. What law determines how certain bureaucracies operate?

    1. Fresnel
    2. Parkinson
    3. Newton
    4. Snell
    5. Gresham
  6. Here are several possible levels of shading.

    1. Bilinearly interpolate a surface normal at each pixel from normals that you specified at each vertex. Then normalize the length of each interpolated normal vector. Evaluate the Phong lighting model at each pixel from the interpolated normal. That is called Phong shading.
    2. Bilinearly shade the polygon, triangle by triangle, from the colors you specified for its vertices.
    3. Shade the whole polygon to be the color that you specified for one of the vertices.
    4. Use the Phong lighting model to compute the color of each vertex from that vertex's normal. Bilinearly interpolate that color over the polygon. That is called Gouraud shading.

    Which one is the best?

  7. Which one of the above is the slowest?

Wed 2016-10-26

  1. Which of the following lighting methods can produce a small highlight inside a triangle, which is brighter than the brightest vertex?

    1. Gouraud
    2. Phong
    3. Flat (making the whole triangle the same color).
  2. What is the purpose of clipping the value of dot(L,N) in the following code:

    float Kd = max( dot(L, N), 0.0 );

    1. so that faces facing away from the viewer are not lit.
    2. so that faces facing away from the light source are not lit.
    3. so that faces that are too far away from the light source are not lit.
    4. so that faces that are too close to the light source are not lit.
    5. so that shiny faces are lit properly.
  3. Faces whose normals point away from the viewer are always hidden when the object is

    1. Watertight
    2. Not watertight.

CG Lecture 20, Thu 2016-10-20

  1. Estimated grades to date will be uploaded to LMS. They were computed as follows:

    1. The homeworks were scaled to have the same weight, added, and made a total weight of 48%.
    2. The midterm exam was weighted at 48%.
    3. Each iclicker class was given 1 if at least one question was answered. All the iclickers together were weighted at 4%.
    4. The class average was 76%. That would earn a B+.
  2. Handwritten notes from Wed.

  3. I added notes to yesterday's posting giving the big ideas from the slide sets.

  4. Iclicker questions.

  5. New slides:

    1. 7_4 Shadows.

      Big idea:

      1. If there is one light source, we can compute visibility with it as the viewer.
      2. Somehow mark the visible (i.e., lit) and hidden (i.e., shadowed) portions of the objects.
      3. Then recompute visibility from the real viewer and shade the visible objects depending on whether they are lit or shadowed.
      4. This works for a small number of lights.
  6. 7_5 Lighting and shading I.

    Big big topic.

  7. This woman sees 100 times more colors than the average person.

  8. Phong lighting model: The total light at a pixel is the sum of

    1. Incoming ambient light times ambient reflectivity of the material at the pixel,
    2. Incoming diffuse light times diffuse reflectivity times a factor for the light source being low on the horizon,
    3. Incoming specular light times specular reflectivity times a factor for the eye not being aligned to the reflection vector, with an exponent for the material shininess,
    4. Light emitted by the material.
  9. That is not intended to be completely physical, but to give the programmer lots of parameters to tweak.
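    A one-channel numeric sketch of that sum (all vectors unit length; the coefficients are illustrative knobs, not values from the course code):

```javascript
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// ka, kd, ks: ambient/diffuse/specular reflectivities; L: to the light;
// N: surface normal; V: to the viewer; R: reflection direction.
function phong(ka, kd, ks, shininess, emitted, ambient, light, L, N, V, R) {
  const diffuse = light * Math.max(dot(L, N), 0);  // light-low-on-horizon factor
  const specular = light * Math.pow(Math.max(dot(V, R), 0), shininess);
  return ka * ambient + kd * diffuse + ks * specular + emitted;
}

// Light straight overhead and viewer aligned with the reflection:
// every geometric factor is 1, so the result is ka + kd + ks = 1.
phong(0.1, 0.5, 0.4, 20, 0, 1, 1, [0, 0, 1], [0, 0, 1], [0, 0, 1], [0, 0, 1]);
```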

  10. In OpenGL you can do several possible levels of shading. Pick one of the following choices. Going down the list makes the shading better but costlier.

    1. Shade the whole polygon to be the color that you specified for one of the vertices.
    2. Bilinearly shade the polygon, triangle by triangle, from the colors you specified for its vertices.
    3. Use the Phong lighting model to compute the color of each vertex from that vertex's normal. Bilinearly interpolate that color over the polygon. That is called Gouraud shading.
    4. Bilinearly interpolate a surface normal at each pixel from normals that you specified at each vertex. Then normalize the length of each interpolated normal vector. Evaluate the Phong lighting model at each pixel from the interpolated normal. That is called Phong shading.

    Enrichment: Maureen Stone: Representing Colors as 3 Numbers.

    Enrichment: Why do primary schools teach that the primary colors are Red Blue Yellow?
  11. Summary of the new part of shadedCube:

    1. var nBuffer = gl.createBuffer();

      Reserve a buffer id.

    2. gl.bindBuffer( gl.ARRAY_BUFFER, nBuffer );

      1. Create that buffer as a buffer of data items, one per vertex.
      2. Make it the current buffer for future buffer operations.
    3. gl.bufferData( gl.ARRAY_BUFFER, flatten(normalsArray), gl.STATIC_DRAW );

      Write an array of normals, flattened to remove metadata, into the current buffer.

    4. var vNormal = gl.getAttribLocation( program, "vNormal" );

      Get the address of the shader (GPU) variable named "vNormal".

    5. gl.vertexAttribPointer( vNormal, 3, gl.FLOAT, false, 0, 0 );

      Declare that the current buffer contains 3 floats per vertex.

    6. gl.enableVertexAttribArray( vNormal );

      Enable the array for use.

    7. (in the shader) attribute vec3 vNormal;

      Declare the variable in the vertex shader that will receive each row of the javascript array as each vertex is processed.

    8. The whole process is repeated with the vertex positions.

      Note that the variable with vertex positions is not hardwired here. You pass in whatever data you want, and your shader program uses it as you want.

  12. 8_1 Lighting and shading 2.

CG Homework 7, due Thu 2016-10-27 9am

  1. (4 pts) Use the homogeneous matrix to project homogeneous points onto the plane x+2y+4z=5, with COP at the origin. What does the point (1,2,4,5) project to? Give the answer as a Cartesian point.

  2. (4 pts) Repeat the previous question with the COP changed to (1,1,1,1).

  3. (6 pts) Do exercise 5.6 on page 272 of the text.

  4. (6 pts) This question will take some thinking.

    Imagine that you have an infinitely large room illuminated by one infinitely long row of point lights. This figure shows a side view of the room.

    The lights are h above the floor and are 1 meter from each other. Assume that the ceiling above the lights is black and that no light reflects off of anything.

    An object at distance d from a light gets illuminated with a brightness \(\frac{1}{d^2}\).

    Each point on the floor is illuminated by all the lights, but more brightly by the closer lights.

    A point p directly below a light will be a little brighter than a point q halfway between two such points. That is the problem --- we want the floor (at least the part directly below the line of lights) to be evenly lit, at least within 1%.

    However, the higher the line of lights, the more evenly the floor will be lit.

    Your task is to find the minimum value of h such that the strip of floor directly below the line of lights is evenly lit to within 1%.

    ../../images/hw-lights.png

    E.g., the brightness at p is

    \(\sum_{i=-\infty}^{\infty} \;\;\; \frac{1}{h^2+i^2}\)

(Total: 20 points.)