Programming Assignment #7

by Eftychios Sifakis on November 29, 2020

Due: Wednesday, December 9th. (see the class late policy on the Syllabus)

Synopsis: You get to use WebGL to create a 3-dimensional scene, with your own shaders, models, and associated data (vertex attributes, colors, normals, textures, etc).

Learning Objectives: To familiarize yourselves with the complexities of using the WebGL API and the GLSL shading language to craft a 3-dimensional scene. This includes defining vertex attributes, dispatching them to the GPU, and using them within shaders, along with vertex colors, normals, and texture coordinates (when applicable). Also, using one or more textures to create a richer appearance, if desired.

Evaluation: Based on the announced evaluation system. You get a check (“3”) if you turn in a viable and complete submission (even if it just draws a rectangle like the example in the tutorial). “Above and beyond” grades will be awarded to people who make particularly cool pictures.

Hand-in: Will be as a Canvas (the course management system, not the JavaScript drawing library) assignment. [Submission link]. Make sure that you turn in all files needed for your program to run. It is (barely) acceptable to turn in a single HTML file with your program, but it is much preferable to separate your code into an .html file and a separate .js file containing the JavaScript code, similar to the examples in our GitHub repository (see, e.g., Demos 0 from Week 5). If you submit anything other than a single HTML file, please put everything in a single ZIP archive. It is not acceptable to submit a link to JSbin for this assignment!

Description

Your task will be to create a 3D scene, visualized using the WebGL drawing API (as opposed to the “Canvas” 2D drawing API we have used up to and including homework assignment #5). We have seen several examples in class about using this interface, which you can find in subdirectory Week13 of our GitHub repository (check out this and this URL for JSBin versions of some of these examples). When authoring such three-dimensional visualizations, it will be your responsibility to do the following, among other things (online tools like shdr.bkcore.com would do many of these things for you):

  • Write your own vertex shader and fragment shader; a relatively clean and straightforward way to do this would be to include them in the body of <script> blocks in the HTML code, as we saw in the examples above (a minimal sketch of the shader/program setup appears after this list). You could, in fact, write more than a single vertex/fragment shader pair, if you choose to, e.g. if you have several objects, each of which requires a different shader.
  • You should compile and link the two shaders into a “program”, as we saw in our examples. You can create more than one “program” if you wish to use a different shader routine for different objects you are drawing.
  • You should define your own vertex attributes and uniform variables, as we saw in our examples. This involves keeping references (locations) to attribute/uniform variables obtained by querying the linked program, defining the data associated with vertex attributes (and uniforms), dispatching that data to the GPU, and performing any other operations necessary to do the drawing (see the attribute/buffer sketch after this list).
  • You should define the geometry of the model(s) being used, by providing an indexed set of vertices that make up triangles. This should also be buffered to the GPU, as we saw in class, and used in a drawElements() call inside the draw loop.
  • You should create all necessary transforms that the shaders might need, and dispatch those to the GPU (typically, as “uniforms”); see the transform sketch after this list.
  • You should load texture images (if you choose to include texture mapping in your implementation), and furnish texture coordinates for your models (see the texture-loading sketch after this list).
  • You should use the Z-buffer visibility mechanism to leverage the GPU’s capability for performing visibility queries.
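
For reference, here is a minimal sketch of the first two items above: shader source in <script> blocks, compiled and linked into a “program”. The element ids ("vs", "fs"), the attribute/uniform names, and the compileShader helper are illustrative choices, not a required interface; gl is assumed to be the WebGL context obtained from your canvas.

    <!-- the shader source lives in <script> blocks whose type the browser will not execute -->
    <script id="vs" type="x-shader/x-vertex">
      attribute vec3 vPosition;
      attribute vec3 vColor;
      uniform mat4 uMVP;
      varying vec3 fColor;
      void main() {
        fColor = vColor;
        gl_Position = uMVP * vec4(vPosition, 1.0);
      }
    </script>
    <script id="fs" type="x-shader/x-fragment">
      precision highp float;
      varying vec3 fColor;
      void main() {
        gl_FragColor = vec4(fColor, 1.0);
      }
    </script>

    // ... and in your JavaScript:
    // compile one shader stage from the text of a <script> element
    function compileShader(gl, id, shaderType) {
        let shader = gl.createShader(shaderType);
        gl.shaderSource(shader, document.getElementById(id).text);
        gl.compileShader(shader);
        if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
            console.error(gl.getShaderInfoLog(shader));   // report GLSL compile errors
            return null;
        }
        return shader;
    }

    // link a vertex/fragment pair into a "program" and make it current
    let program = gl.createProgram();
    gl.attachShader(program, compileShader(gl, "vs", gl.VERTEX_SHADER));
    gl.attachShader(program, compileShader(gl, "fs", gl.FRAGMENT_SHADER));
    gl.linkProgram(program);
    if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
        console.error(gl.getProgramInfoLog(program));
    }
    gl.useProgram(program);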
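
Continuing that sketch, the vertex attributes, the indexed geometry, and the drawElements() call could look like the following; the single triangle here is just a stand-in for your actual model, and program is the linked program from the sketch above.

    // one colored triangle, to illustrate the pattern; a real model has many more vertices
    let vertexPos   = new Float32Array([ -1, -1, 0,   1, -1, 0,   0, 1, 0 ]);
    let vertexColor = new Float32Array([  1,  0, 0,   0,  1, 0,   0, 0, 1 ]);
    let triangleIndices = new Uint16Array([ 0, 1, 2 ]);   // three vertex indices per triangle

    // create a buffer on the GPU for each array and ship the data over
    function makeBuffer(gl, target, data) {
        let buffer = gl.createBuffer();
        gl.bindBuffer(target, buffer);
        gl.bufferData(target, data, gl.STATIC_DRAW);
        return buffer;
    }
    let posBuffer   = makeBuffer(gl, gl.ARRAY_BUFFER, vertexPos);
    let colorBuffer = makeBuffer(gl, gl.ARRAY_BUFFER, vertexColor);
    let indexBuffer = makeBuffer(gl, gl.ELEMENT_ARRAY_BUFFER, triangleIndices);

    // query the linked program for attribute/uniform locations
    let positionLoc = gl.getAttribLocation(program, "vPosition");
    let colorLoc    = gl.getAttribLocation(program, "vColor");
    let mvpLoc      = gl.getUniformLocation(program, "uMVP");

    // inside the draw loop: point each attribute at its buffer, then draw the indexed triangles
    gl.bindBuffer(gl.ARRAY_BUFFER, posBuffer);
    gl.enableVertexAttribArray(positionLoc);
    gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
    gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
    gl.enableVertexAttribArray(colorLoc);
    gl.vertexAttribPointer(colorLoc, 3, gl.FLOAT, false, 0, 0);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
    gl.drawElements(gl.TRIANGLES, triangleIndices.length, gl.UNSIGNED_SHORT, 0);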
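
For the transforms and the Z-buffer, a sketch along the lines of our in-class cube example might look like this. It assumes a 4x4 matrix library such as glMatrix (the mat4 calls below), a canvas variable for the drawing canvas, an angle variable driving the model rotation, and the mvpLoc location from the previous sketch.

    // build the usual projection / camera / model transforms on the CPU
    let projection = mat4.create();
    mat4.perspective(projection, Math.PI / 4, canvas.width / canvas.height, 0.1, 100);
    let view = mat4.create();
    mat4.lookAt(view, [0, 2, 5], [0, 0, 0], [0, 1, 0]);   // eye, target, up
    let model = mat4.create();
    mat4.rotateY(model, model, angle);                    // e.g. a spinning model

    // combine them and hand the result to the shader as a uniform
    let mvp = mat4.create();
    mat4.multiply(mvp, projection, view);
    mat4.multiply(mvp, mvp, model);
    gl.uniformMatrix4fv(mvpLoc, false, mvp);

    // turn on the Z-buffer, and remember to clear it along with the color buffer every frame
    gl.enable(gl.DEPTH_TEST);
    gl.clearColor(0, 0, 0, 1);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);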
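
Finally, if you choose to use texture mapping: image loading in the browser is asynchronous, so the texture data can only be uploaded (and drawing started) once the image has arrived. The file name, uniform name, and draw function below are placeholders for whatever your own program uses.

    // load a texture image; the GPU upload happens in the onload callback
    let texture = gl.createTexture();
    let image = new Image();
    image.onload = function () {
        gl.bindTexture(gl.TEXTURE_2D, texture);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
        gl.generateMipmap(gl.TEXTURE_2D);   // WebGL 1 requires power-of-two dimensions for this
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
        requestAnimationFrame(draw);        // only start drawing once the texture is ready
    };
    image.src = "myTexture.png";            // placeholder file name

    // inside the draw loop: bind the texture to unit 0 and tell the sampler uniform about it
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.uniform1i(gl.getUniformLocation(program, "uTexture"), 0);

Texture coordinates themselves are just one more vertex attribute, buffered and pointed to exactly like the position and color arrays in the earlier sketch.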

In order to secure at least a “satisfactory” (“3”) grade on this assignment, think back, at a minimum, to the world you created in Programming Assignment #5, and ask yourself how you could implement a similar appearance using WebGL this time (ideally, this appearance would be significantly richer!). The minimum requirements are as follows:

  • Your scene should include at least one “polyhedral” object with multiple shaded polygonal facets (as opposed to being drawn in “wireframe”, i.e. only by their edges). These will typically be composed of triangles. Your entire object cannot be flat! (unless you include several objects in your world, in which case it is OK as long as at least one of them is not flat). We would like to be able to visually appreciate that the Z-buffer visibility algorithm is actually working … “front-facing” triangles/polygons should hide parts of the objects that are located behind them.
  • You should either include diffuse & specular shading on your objects, or use texture mapping (maybe both!); a sample fragment shader sketch appears after this list. It will not be acceptable to have the entire polygonal object (or objects) show up as a single color, without any variation due to lighting. It is perfectly fine to use vertex colors to customize the coloration of different parts of the model.
  • We would like to see you use at least three different vertex attributes in your shader. Those could be, for example, position/color/normal, position/normal/texture-coordinates, etc. If you use more than one shader pair in your implementation, this requirement is relaxed: at least one of the shader pairs should use at least two vertex attributes.
  • There should be a way to enact some change in the scene, other than just controlling the camera position. This would typically be some modeling transform that moves some part of the scene with respect to the world coordinates, some motion along a curve, a hierarchical modeling apparatus, etc. At the very minimum, some modeling transform should be applied (in our in-class examples, this was done by a “rotation” transform applied to the cube model).
  • There should be some way to affect the placement of the camera relative to the scene. In our examples, we have a slider that spins the camera around the scene … a similar requirement was stated in Programming Assignment #5 (see the camera/animation sketch after this list).
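
To give an idea of the diffuse & specular requirement, here is a sketch of a fragment shader that computes simple diffuse plus specular (Blinn-Phong style) lighting from an interpolated normal; the varying/uniform names and the shininess constant are illustrative, and you are free to structure your own shaders differently.

    // fragment shader: simple diffuse + specular lighting from an interpolated normal
    precision highp float;
    varying vec3 fNormal;          // normal, passed down from the vertex shader
    varying vec3 fPosition;        // surface position, in the same space as the light/eye
    uniform vec3 uLightDir;        // unit vector pointing toward the light
    uniform vec3 uEyePos;          // camera position
    uniform vec3 uColor;           // base (diffuse) surface color
    void main() {
        vec3 n = normalize(fNormal);
        float diffuse = max(dot(n, uLightDir), 0.0);
        vec3 toEye = normalize(uEyePos - fPosition);
        vec3 h = normalize(uLightDir + toEye);              // Blinn half-vector
        float specular = pow(max(dot(n, h), 0.0), 30.0);    // 30.0 = shininess exponent
        vec3 color = 0.2 * uColor + diffuse * uColor + specular * vec3(1.0);
        gl_FragColor = vec4(color, 1.0);
    }

The normal would be supplied as an additional vertex attribute and interpolated through a varying, which is also a natural way to satisfy the three-attribute requirement.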
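
For the last two items, here is a sketch of a draw loop that animates a modeling transform over time and lets a slider spin the camera around the scene. It assumes an <input type="range"> element with id "cameraAngle" on the page, the glMatrix library, and the projection matrix and mvpLoc location from the earlier sketches.

    let slider = document.getElementById("cameraAngle");   // slider value interpreted as degrees

    function draw(timestamp) {
        gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

        // camera placement, controlled by the slider
        let cameraAngle = Number(slider.value) * Math.PI / 180;
        let eye = [5 * Math.sin(cameraAngle), 2, 5 * Math.cos(cameraAngle)];
        let view = mat4.create();
        mat4.lookAt(view, eye, [0, 0, 0], [0, 1, 0]);

        // modeling transform, animated over time (timestamp is in milliseconds)
        let model = mat4.create();
        mat4.rotateY(model, model, timestamp * 0.001);

        let mvp = mat4.create();
        mat4.multiply(mvp, projection, view);
        mat4.multiply(mvp, mvp, model);
        gl.uniformMatrix4fv(mvpLoc, false, mvp);

        // ... bind attributes and call drawElements() here, as in the earlier sketches ...

        requestAnimationFrame(draw);   // keep animating
    }
    requestAnimationFrame(draw);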

Of course, we would like to think you will extend beyond these bare-minimum requirements. Here is a list of possible embellishments that you might wish to consider … doing several of them, or doing them in particularly creative and aesthetically interesting ways will make you competitive for an “above and beyond” (“4”) grade.

  • Combine non-trivial lighting and texturing. We would almost expect this as a prerequisite for any submission that would have a chance to compete for a “4” … you should use both specular/diffuse lighting and texture mapping at the same time (or for different objects).
  • Consider modeling several different objects, subject to different modeling transformations, maybe implementing a hierarchical model, and maybe using different shader programs (i.e. multiple pairs of vertex/fragment shaders, each compiled into its own “program” and used via the useProgram() call before drawing this component). Think about whether it makes sense to have multiple modeling transforms for a hierarchical model (separate “uniforms” if using a single shader?), or whether you want to use different programs altogether to draw those (see the hierarchical-model sketch after this list).
  • Consider using multiple textures at once, or in fancy combinations (think of the “decal” texturing trick we showed in class, for example); a sketch of binding two textures at once appears after this list.
  • Implement some complex objects, or objects with intricate motion.
  • Implement some interesting camera motion.
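
As a sketch of the hierarchical-model suggestion, one simple approach keeps a single shader program and a single modeling-transform uniform, and re-sends that uniform before drawing each piece. The piece names, angles, and index counts below are hypothetical, and the per-piece attribute/index-buffer bindings are omitted for brevity; projection, view, and mvpLoc are as in the earlier sketches.

    let bodyAngle = 0.5, armAngle = 0.3;   // example joint parameters (animate these as you like)
    let viewProj = mat4.create();
    mat4.multiply(viewProj, projection, view);

    // parent piece (e.g. a body)
    let bodyXform = mat4.create();
    mat4.rotateY(bodyXform, bodyXform, bodyAngle);
    let mvp = mat4.create();
    mat4.multiply(mvp, viewProj, bodyXform);
    gl.uniformMatrix4fv(mvpLoc, false, mvp);
    gl.drawElements(gl.TRIANGLES, bodyIndexCount, gl.UNSIGNED_SHORT, 0);

    // child piece (e.g. an arm): compose its local transform with the parent's
    let armXform = mat4.clone(bodyXform);
    mat4.translate(armXform, armXform, [1, 0.5, 0]);
    mat4.rotateZ(armXform, armXform, armAngle);
    mat4.multiply(mvp, viewProj, armXform);
    gl.uniformMatrix4fv(mvpLoc, false, mvp);
    gl.drawElements(gl.TRIANGLES, armIndexCount, gl.UNSIGNED_SHORT, 0);

If different pieces need genuinely different shading, the alternative is to compile each vertex/fragment pair into its own “program” and switch between them with useProgram() before the corresponding draw call.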
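
And a sketch of using two textures at once (e.g. for the decal trick): each texture object is bound to its own texture unit, and each sampler2D uniform is told which unit to read from. The texture objects and uniform names below are assumed to have been created and loaded as in the texture sketch earlier.

    gl.activeTexture(gl.TEXTURE0);                 // unit 0: the base texture
    gl.bindTexture(gl.TEXTURE_2D, baseTexture);
    gl.uniform1i(gl.getUniformLocation(program, "uBaseTexture"), 0);

    gl.activeTexture(gl.TEXTURE1);                 // unit 1: the decal texture
    gl.bindTexture(gl.TEXTURE_2D, decalTexture);
    gl.uniform1i(gl.getUniformLocation(program, "uDecalTexture"), 1);

Inside the fragment shader, the two samples can then be combined however you like, for example blending the decal over the base color using the decal's alpha channel.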
