CSCI 4530/6530 - Spring 2012
Advanced Computer Graphics
The goal of this assignment is to implement pieces of three different rendering methods that can be used to capture important rendering effects including reflection, color bleeding, and caustics. These different rendering methods are implemented within a single interactive OpenGL viewer similar to previous assignments to help with visualization and debugging. Furthermore, since they are implemented in the same system, hybrid renderings that capture combinations are possible.
Initially, you will only see the ground plane -- that's because the sphere intersection routine has not been implemented! This is your first task. You'll have to do a little work beyond our discussion in class to handle spheres that are not centered at the origin. Note that our OpenGL rendering first converts the spheres to quads, but the original spheres should be used for ray tracing intersection.
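The off-center case can be handled by shifting the ray into the sphere's local frame and solving the usual quadratic. Here is a minimal standalone sketch; the `Vec3` struct and `intersectSphere` name are illustrative, and your `Sphere::intersect` signature in the framework will differ:

```cpp
#include <cmath>

struct Vec3 {
  double x, y, z;
  Vec3 operator-(const Vec3 &o) const { return {x - o.x, y - o.y, z - o.z}; }
  double dot(const Vec3 &o) const { return x * o.x + y * o.y + z * o.z; }
};

// Ray-sphere intersection with an arbitrary center: substitute
// p(t) = orig + t*dir into |p - center|^2 = r^2 and solve the
// quadratic in t.  Returns the smallest positive t, or a negative
// value if the ray misses the sphere.
double intersectSphere(const Vec3 &orig, const Vec3 &dir,
                       const Vec3 &center, double radius) {
  Vec3 oc = orig - center;            // shift so the sphere is at the origin
  double a = dir.dot(dir);
  double b = 2.0 * oc.dot(dir);
  double c = oc.dot(oc) - radius * radius;
  double disc = b * b - 4.0 * a * c;
  if (disc < 0.0) return -1.0;        // no real roots: ray misses
  double sq = std::sqrt(disc);
  double t = (-b - sq) / (2.0 * a);   // try the nearer root first
  if (t > 1e-6) return t;
  t = (-b + sq) / (2.0 * a);          // origin may be inside the sphere
  return (t > 1e-6) ? t : -1.0;
}
```

The small epsilon (1e-6) avoids re-intersecting the surface the ray just left, which matters once you spawn reflective and shadow rays.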
Use the ray tree visualization to debug your recursive rays. When
't' is pressed, a ray is traced into the scene through the pixel under
the mouse cursor. You will have to add calls to
RayTree::AddReflectiveSegment in your recursive raytracing
code. The initial main/camera/eye ray is drawn in white, reflective
rays are drawn in red, and shadow rays (traced from each intersection
to the lights) are drawn in green.
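The call belongs inside your recursive trace, right where the mirror ray is constructed. The sketch below uses stand-in `Ray`/`RayTree` classes purely for illustration (the real ones ship with the framework, and the real `AddReflectiveSegment` arguments will differ); the point is the placement of the call before the recursive step:

```cpp
#include <vector>

// Minimal stand-ins for illustration only; the framework provides the
// real Ray and RayTree classes.
struct Ray { double marker; };
struct RayTree {
  static std::vector<int> segments;   // records the depth of each bounce
  static void AddReflectiveSegment(const Ray &, double, double, int depth) {
    segments.push_back(depth);
  }
};
std::vector<int> RayTree::segments;

// Skeleton of the recursive trace: register the mirror ray with the
// ray tree *before* recursing, so pressing 't' shows every reflective
// bounce in red.
double TraceRay(const Ray &ray, int bounces_left) {
  if (bounces_left == 0) return 0.0;
  Ray reflected{double(bounces_left)};                       // mirror ray (placeholder)
  RayTree::AddReflectiveSegment(reflected, 0.0, 1.0, bounces_left);
  return 1.0 + TraceRay(reflected, bounces_left - 1);
}
```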
./render -input reflective_spheres.obj
./render -input reflective_spheres.obj -num_bounces 1
./render -input reflective_spheres.obj -num_bounces 3 -num_shadow_samples 1
For extra credit, you can implement different strategies for
selecting these random points (e.g., stratified sampling or jittered
samples) and discuss in your README.txt the performance/quality
tradeoffs. Also, for extra credit you can implement other effects
using distribution ray tracing such as glossy surfaces, motion blur,
or depth of field.
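One way to jitter, as a sketch: split the unit square into an n x n grid and draw one uniform point per cell, which avoids the clumping of purely independent samples and gives less noisy soft shadows for the same sample count. Names are illustrative, and `rand()` is used only for brevity; the framework's MersenneTwister is the better choice:

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Jittered (stratified) sampling over the unit square: one random
// point per grid cell, n*n points total.
std::vector<std::pair<double, double>> JitteredSamples(int n) {
  std::vector<std::pair<double, double>> pts;
  pts.reserve(n * n);
  for (int i = 0; i < n; ++i) {
    for (int j = 0; j < n; ++j) {
      double u = (i + rand() / (RAND_MAX + 1.0)) / n;  // jitter within cell i
      double v = (j + rand() / (RAND_MAX + 1.0)) / n;  // jitter within cell j
      pts.push_back({u, v});
    }
  }
  return pts;
}
```

These (u, v) pairs can then be mapped onto the area light's quad when choosing shadow-ray targets.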
./render -input textured_plane_reflective_sphere.obj -num_bounces 1 -num_shadow_samples 1
./render -input textured_plane_reflective_sphere.obj -num_bounces 1 -num_shadow_samples 4
./render -input textured_plane_reflective_sphere.obj -num_bounces 1 -num_shadow_samples 9 -num_antialias_samples 9
Press 'w' to view the wireframe. The quad mesh model is stored in
a half edge data structure similar to assignment 1. Press 's' to
subdivide the scene. Each quad will be split into 4 quads. Press 'i'
to blend or interpolate the radiosity values. Press the space bar to
make one iteration of the radiosity solver, press 'a' to animate the
solver (many iterations), and press 'c' to reset the radiosity
solution. The images below show various visualizations of the classic
Cornell box scene.
./render -input cornell_box.obj
The top row of images shows: the MATERIALS, with wireframe after 2 subdivisions, the RADIANCE after allowing the top 16 patches to shoot their light into the scene, the RADIANCE after many iterations (near convergence), and the smooth interpolation of those values. The bottom row shows: the FORM FACTORS from a patch on the left wall (outlined in red) to all other patches in the scene, the ABSORBED light after shooting from the top 16 patches, the ABSORBED light after many iterations, and a visualization of the UNDISTRIBUTED light after the top 5 patches have shot their light into the scene.
Your task is to implement the form factor computation and the radiosity solver. You can choose any method we discussed in class or read about in various radiosity references. For the Cornell box scene you do not need to worry about visibility (occlusions). In your README.txt file, discuss the performance quality tradeoffs between the number of patches and the complexity of computing a single form factor.
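One of the simplest approaches is a single-sample estimate evaluated at the patch centers, F_ij ~= cos(theta_i) * cos(theta_j) * A_j / (pi * r^2). The sketch below uses illustrative names and this one-sample strategy as an assumption; taking more sample-point pairs per patch (the idea behind -num_form_factor_samples) reduces the error, especially for nearby patches:

```cpp
#include <cmath>

struct V3 { double x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Single-sample form factor from patch i to patch j, evaluated at the
// patch centers ci, cj with unit normals ni, nj and receiver area areaJ.
double FormFactor(V3 ci, V3 ni, V3 cj, V3 nj, double areaJ) {
  V3 d = sub(cj, ci);                   // direction from i to j (unnormalized)
  double r2 = dot(d, d);
  double r = std::sqrt(r2);
  double cosI =  dot(ni, d) / r;        // angle at patch i
  double cosJ = -dot(nj, d) / r;        // angle at patch j (d points toward j)
  if (cosI <= 0.0 || cosJ <= 0.0) return 0.0;  // patches face away from each other
  return cosI * cosJ * areaJ / (M_PI * r2);
}
```

As a sanity check, two parallel unit-area patches facing each other at distance 1 give F = 1/pi.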
./render -size 300 150 -input l.obj
./render -size 300 150 -input l.obj -num_form_factor_samples 100
./render -size 300 150 -input l.obj -num_shadow_samples 1
./render -size 300 150 -input l.obj -num_form_factor_samples 10 -num_shadow_samples 1
Here is another test scene that requires visibility in the form factor computation:
./render -input cornell_box_diffuse_sphere.obj -sphere_rasterization 16 12
./render -input cornell_box_diffuse_sphere.obj -sphere_rasterization 16 12 -num_shadow_samples 1
The same scene rendered using ray tracing is too dark. Even when
we approximate global illumination using an ambient term, the scene is
missing the characteristic color bleeding.
./render -input cornell_box_diffuse_sphere.obj -ambient_light 0.0 0.0 0.0
./render -input cornell_box_diffuse_sphere.obj -ambient_light 0.0 0.0 0.0 -num_shadow_samples 1
./render -input cornell_box_diffuse_sphere.obj -ambient_light 0.2 0.2 0.2 -num_shadow_samples 1
./render -input cornell_box_diffuse_sphere.obj -ambient_light 0.2 0.2 0.2 -num_shadow_samples 10
./render -input cornell_box_diffuse_sphere.obj -ambient_light 0.2 0.2 0.2 -num_shadow_samples 100
The first step is to trace photons into the scene. Press 'p' to call your code which will trace the specified number of photons throughout the scene. When a photon hits a surface, the photon's energy and incoming direction are recorded. Depending on the material properties of the surface, the photon will be recursively traced in the mirror direction (for reflective materials) or a random direction (for diffuse materials) or terminated. Don't forget to multiply by the diffuse or reflective colors to decrease the energy of the photon appropriately. How do you decide when to stop bouncing the photons?
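One common answer to the termination question is Russian roulette: continue a photon with probability equal to the surface reflectance p, and divide the surviving photon's energy by p so the expected energy is unbiased. This is a sketch under that assumption, with illustrative names (a real photon carries an RGB energy and a direction):

```cpp
// Russian-roulette termination for photon tracing.  xi is a uniform
// random number in [0,1); reflectance is the probability the photon
// survives this bounce.
struct Photon { double energy; };

bool BouncePhoton(Photon &ph, double reflectance, double xi) {
  if (xi >= reflectance) return false;  // photon absorbed: stop bouncing
  ph.energy /= reflectance;             // compensate so E[energy] is unchanged
  return true;                          // caller traces the next bounce
}
```

Compared with a fixed bounce cap, roulette keeps the estimate unbiased while still guaranteeing that photon paths terminate.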
A visualization of the photon hits is provided in the PhotonMapping class (shown below left). Press 'l' to toggle the rendering of the photons. A KD-tree spatial data structure is also provided to store the photons where they hit, which will allow you to quickly collect all points within a query boundary box. Press 'k' to toggle the rendering of the kdtree wireframe visualization.
The second step is to extend your ray tracing implementation to
search for the k closest photons to the hit point. The
energy and incoming direction of each photon is accumulated (according
to the surface reflectance properties) to determine how much
additional light is reflected to the camera (added to the raytracing
result instead of the traditional "ambient" lighting hack). To
trigger raytracing with photon gathering, press 'g'. The third image
below is a traditional recursive raytracing of the scene. The
rightmost image adds in the energy from the photon map to capture the caustics.
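The density estimate behind the gather step can be sketched as: sum the energies of the k nearest photons and divide by the area of the disc that just encloses them, pi * r_k^2. This version assumes a single scalar energy per photon and precomputed distances (the KD-tree query supplies the candidates); real code also weights each photon by the surface's diffuse color and its incoming direction:

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Radiance estimate from a set of candidate photons near a hit point.
// Each photon is a (distance, energy) pair; only the k nearest are kept.
double GatherEstimate(std::vector<std::pair<double, double>> photons,
                      std::size_t k) {
  if (photons.size() > k) {
    // Partition so the k smallest distances come first (pairs compare
    // lexicographically, i.e. by distance).
    std::nth_element(photons.begin(), photons.begin() + k - 1, photons.end());
    photons.resize(k);
  }
  double rMax = 0.0, sum = 0.0;
  for (const auto &p : photons) {
    rMax = std::max(rMax, p.first);
    sum += p.second;
  }
  if (rMax == 0.0) return 0.0;
  return sum / (M_PI * rMax * rMax);   // energy per unit area of enclosing disc
}
```

Collecting more photons (-num_photons_to_collect) smooths the estimate at the cost of blurring sharp caustic boundaries.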
./render -input reflective_ring.obj -num_photons_to_shoot 10000 -num_bounces 2 -num_shadow_samples 10
./render -input reflective_ring.obj -num_photons_to_shoot 500000 -num_bounces 2 -num_shadow_samples 10 -num_antialias_samples 4
./render -input cornell_box_diffuse_sphere.obj -num_photons_to_shoot 500000 -num_shadow_samples 500 -num_photons_to_collect 500
./render -input cornell_box_reflective_sphere.obj -num_photons_to_shoot 500000 -num_shadow_samples 500 -num_photons_to_collect 500 -num_bounces 1
Similar to the previous assignments. NOTE: MersenneTwister is a high quality pseudo-random number generator. Like drand48/srand48, it can be seeded with a constant to provide repeatable sequences for deterministic behavior while debugging.
Similar to the triangle half-edge data structure you implemented in assignment 1. Spheres are stored both in center/radius format and converted to quad patches for use with radiosity.
The basic rendering engines and visualization tools and image class for loading and saving .ppm files.
Note: These test data sets are a non-standard extension of the .obj file format. Feel free to modify the files as you wish to implement extensions for extra credit.