Raytracer Project


Serene is a path tracer I made for my CS 488 (Introduction to Graphics) class's final project. The goal of this project was to get as close to photorealism as possible in a reasonable amount of time, and I am really happy with the results. It all took about two full weeks' worth of work, and I honestly had a lot of fun working on this.

Below is a showcase of various features of Serene.

Core Features

Global Illumination

Global illumination was implemented with path tracing: Monte Carlo integration of the rendering equation, with a variable number of samples per pixel.
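As a rough sketch of the idea, a single diffuse bounce of the estimator might look like the snippet below (Scene, Hit, Ray, Color, sampleHemisphere, and the Lambertian-only BRDF are placeholder assumptions here, not Serene's actual code):

```cpp
// One path-tracing bounce for a diffuse surface: estimate the rendering
// equation with a single random sample and recurse for indirect light.
Color trace(const Scene &scene, const Ray &ray, int depth) {
    Hit hit;
    if (depth >= MAX_DEPTH || !scene.intersect(ray, hit))
        return Color(0, 0, 0);

    Vec3 wi = sampleHemisphere(hit.normal);           // random incoming direction
    double pdf = 1.0 / (2.0 * M_PI);                  // uniform hemisphere pdf
    double cosTheta = std::max(0.0, dot(wi, hit.normal));
    Color incoming = trace(scene, Ray(hit.point, wi), depth + 1);

    // L_o = L_e + f_r * L_i * cos(theta) / pdf, averaged over many samples.
    return hit.emission + (hit.albedo / M_PI) * incoming * cosTheta / pdf;
}
```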

Cornell room globally illuminated

Figure 1: Globally illuminated Cornell room with two spheres

Reflections

Reflections were implemented by recursively issuing rays from the point of intersection of the previous ray, with the reflected direction determined by the surface normal and the incident ray.
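The reflected direction itself is just the standard mirror formula; a minimal version (assuming a Vec3 type with a dot product, not Serene's actual code) looks like this:

```cpp
// Mirror reflection about the surface normal: r = d - 2 (d . n) n,
// where d is the incident direction and n is the unit surface normal.
Vec3 reflect(const Vec3 &d, const Vec3 &n) {
    return d - 2.0 * dot(d, n) * n;
}
```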

Cornell room with two diffuse balls Cornell room with one diffuse and one reflective

Figure 2: (Left) Two non-reflective diffuse spheres. (Right) One diffuse and one reflective sphere

Refractions

Refractive materials bend light according to their index of refraction. Snell’s Law is used to find the change in angle as a ray crosses the boundary of materials with different indices of refraction (IOR).

When this equation has no solution, total internal reflection occurs and the material behaves like a reflective one. Refractive materials also act increasingly like reflective ones as the incident angle approaches grazing (tangent to the surface). The Fresnel equations define how much of the light is refracted and how much is reflected, but they are relatively expensive to compute. Schlick's approximation to the Fresnel equations, however, gives very good results, and that is what I implemented in Serene.
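A minimal sketch of the two pieces, assuming a Vec3 type with a dot product and unit-length directions (not Serene's actual code):

```cpp
#include <cmath>

// Snell's law: returns false on total internal reflection, otherwise writes
// the refracted direction. eta = n1 / n2 across the boundary being crossed.
bool refract(const Vec3 &d, const Vec3 &n, double eta, Vec3 &refracted) {
    double cosI = -dot(d, n);                       // n points against d
    double k = 1.0 - eta * eta * (1.0 - cosI * cosI);
    if (k < 0.0) return false;                      // total internal reflection
    refracted = eta * d + (eta * cosI - std::sqrt(k)) * n;
    return true;
}

// Schlick's approximation: fraction of light reflected at the boundary;
// the remainder (1 - R) is refracted.
double schlick(double cosI, double n1, double n2) {
    double r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    return r0 + (1.0 - r0) * std::pow(1.0 - cosI, 5.0);
}
```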

Cornell room with two diffuse balls Cornell room with one diffuse and one refractive

Figure 3: (Left) Two non-refractive diffuse spheres. (Right) One diffuse and one refractive sphere

Old serene

Figure 4: Another example, refraction in front of textured objects

Texture mapping

Texture mapping was implemented using texture coordinates (u, v): for primitives they are generated from the parametric surface points, and for meshes they are included with the Wavefront OBJ files. The (u, v) values are then used to look up the diffuse color in a specified image file.
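For example, the usual spherical parameterization maps a point on a unit sphere to (u, v) like this (a sketch; mesh UVs come straight from the OBJ file instead):

```cpp
#include <cmath>

// Spherical (u, v) coordinates for a point (x, y, z) on a unit sphere
// centered at the origin; both values land in [0, 1].
void sphereUV(double x, double y, double z, double &u, double &v) {
    u = 0.5 + std::atan2(z, x) / (2.0 * M_PI);   // longitude
    v = 0.5 - std::asin(y) / M_PI;               // latitude
}

// The diffuse color is then read from the texture image at roughly
// pixel (u * (width - 1), v * (height - 1)).
```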

Showcase Earth

Figure 5: (Left) A simple sphere in a simple room. (Right) Exact same scene, but with texture mapping

Depth of Field

I implemented a “blur” effect to emulate depth of field by adding an aperture radius to the render settings, mimicking a realistic camera with a lens rather than a pinhole camera. Instead of always originating at the camera's eye point, rays are generated stochastically from points on a disk whose radius equals the aperture radius. Objects near the focal plane (defined by the render setting focal distance) stay sharp, while objects farther from it appear increasingly blurred, and larger aperture radii produce a stronger depth of field effect.
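A sketch of how such a primary ray can be built (eye, cameraRight, cameraUp, rand01, and Ray are placeholder names, not Serene's render settings):

```cpp
#include <cmath>

// Jitter the ray origin over an aperture disk and aim at the point the
// unperturbed pinhole ray reaches at the focal distance.
Ray dofRay(const Vec3 &eye, const Vec3 &pinholeDir,
           double focalDistance, double apertureRadius) {
    Vec3 focalPoint = eye + focalDistance * pinholeDir;

    // Uniform sample on a disk of the aperture radius around the eye.
    double r = apertureRadius * std::sqrt(rand01());
    double phi = 2.0 * M_PI * rand01();
    Vec3 origin = eye + r * std::cos(phi) * cameraRight
                      + r * std::sin(phi) * cameraUp;

    return Ray(origin, normalize(focalPoint - origin));
}
```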

Figure 6: (Left) No depth of field. (Right) Depth of field with aperture radius 1.8. Both rendered with 500 samples

Normal mapping

Normal mapping was implemented using the tangents and bitangents of primitives and meshes: together with the (u, v) coordinates at a given point, they are used to perturb the surface normal according to a specified normal map.
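A minimal version of that perturbation, with sampleNormalMap and Vec3 as placeholders rather than Serene's actual interfaces, might look like:

```cpp
// Perturb the shading normal with a tangent-space normal map sample.
// The TBN basis (tangent, bitangent, normal) takes the stored normal
// from tangent space into world space.
Vec3 shadeNormal(const Vec3 &n, const Vec3 &tangent, const Vec3 &bitangent,
                 double u, double v) {
    Vec3 t = 2.0 * sampleNormalMap(u, v) - Vec3(1.0, 1.0, 1.0); // [0,1] -> [-1,1]
    return normalize(t.x * tangent + t.y * bitangent + t.z * n);
}
```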

Figure 7: A sphere with snowball-like normal map.

Animation

Animation was implemented by adding a Lua interface for specifying transforms (translations and rotations) at keyframes, and linearly interpolating the transforms between the keyframes. An example is shown at the end (no spoilers!).
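The interpolation itself is simple; here is a sketch for translations (Keyframe and Vec3 are illustrative, not the actual Lua-facing types):

```cpp
#include <vector>

// Linearly interpolate a keyframed translation at time t.
struct Keyframe { double time; Vec3 translation; };

Vec3 translationAt(const std::vector<Keyframe> &keys, double t) {
    if (t <= keys.front().time) return keys.front().translation;
    if (t >= keys.back().time)  return keys.back().translation;
    for (size_t i = 0; i + 1 < keys.size(); ++i) {
        if (t <= keys[i + 1].time) {
            double a = (t - keys[i].time) / (keys[i + 1].time - keys[i].time);
            return (1.0 - a) * keys[i].translation + a * keys[i + 1].translation;
        }
    }
    return keys.back().translation;
}
```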

Soft shadows

I implemented area light sources for physically accurate lighting: many points on the light's surface are sampled, shadow rays are cast towards them, and the contributions are averaged, resulting in soft shadows.
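Roughly, the direct lighting term becomes an average over several shadow rays; in sketch form (AreaLight, scene.occluded, and the other names are placeholders, not Serene's actual code):

```cpp
#include <algorithm>

// Average several shadow-ray samples towards random points on an area light.
Color directLight(const Scene &scene, const Vec3 &p, const Vec3 &n,
                  const AreaLight &light, int samples) {
    Color sum(0, 0, 0);
    for (int i = 0; i < samples; ++i) {
        Vec3 q = light.samplePoint();              // random point on the light
        Vec3 toLight = q - p;
        double dist = length(toLight);
        Vec3 wi = toLight / dist;
        if (!scene.occluded(Ray(p, wi), dist))     // shadow ray
            sum += light.radiance() * std::max(0.0, dot(n, wi));
    }
    return sum / double(samples);                  // soft shadow from the average
}
```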

Cornell room with two diffuse balls Cornell room with one reflective and one refractive

Figure 8: (Left) Soft shadows for diffuse objects. (Right) Soft shadows and caustics for reflective and refractive objects.

Stochastic (box-tent filter) Antialiasing

Since my final scene uses textures over large objects, it made little sense to implement adaptive antialiasing in the renderer as stated in my original proposal. Instead, I used a sampled tent filter, which better approximates the ideal sinc reconstruction filter, to get better antialiasing without needing as many subpixel samples as a simple box filter.
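The subpixel offsets come from inverse-CDF sampling of the tent: a uniform random number in [0, 1) is warped into [-1, 1] with a triangular density peaked at zero. A sketch (rand01 is a placeholder uniform generator):

```cpp
#include <cmath>

// Tent-filter offset in [-1, 1]; samples cluster near 0 with triangular density.
double tentSample() {
    double u = 2.0 * rand01();
    return (u < 1.0) ? std::sqrt(u) - 1.0 : 1.0 - std::sqrt(2.0 - u);
}

// Each subpixel ray is then cast through
// (x + 0.5 + 0.5 * tentSample(), y + 0.5 + 0.5 * tentSample()).
```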

Comparison of filter graphs

Figure 9: Comparison between the sinc filter (blue), the tent (triangle) filter (red), and the box-tent filter used in Serene (green).

Image credits: Nathan Reed. (https://computergraphics.stackexchange.com/questions/3868/why-use-a-tent-filter-in-path-tracing)

These are the results:

Figure 10: (Left) No antialiasing, 40 samples. (Right) Antialiasing 2x2, 10(x4) samples. Both took the exact same time to render.

Final Scene

The final scene is a snow globe and a toy deer on an old wooden table. I really wanted to capture a serene vibe with this scene, and although I wish I had more time to find more photorealistic models, I think the lighting makes up for a lot of it, and I'm quite happy with how the countless hours I spent on the project came together. The glass part of the globe uses a very low IOR of 1.1 so that the inside of the globe remains visible. There are 200 snow particles, each a small sphere, interacting with the scene the same way as every other object.

Serene

Figure 11: The final scene, rendered with 500 samples, 512x512.

Bonus animated version of this scene!

Serene animated

Figure 12: Main scene animated, rendered with 100 samples, 256x256, 24 fps

Miscellaneous features

These are some features to improve my QoL while working on the project.

Multithreading

One of the great things about raytracing is that pixels can be rendered independently of one another, so it stands to benefit a lot from multithreading. The implementation was very simple: the pixel space was segmented into columns, with each segment assigned to its own thread.
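A sketch of that column split using std::thread (renderColumns stands in for the per-column rendering loop and is not Serene's actual function):

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Split the image into contiguous column ranges, one per worker thread.
void renderParallel(int width, int height, int numThreads) {
    std::vector<std::thread> workers;
    int columnsPerThread = (width + numThreads - 1) / numThreads;
    for (int t = 0; t < numThreads; ++t) {
        int begin = t * columnsPerThread;
        int end = std::min(width, begin + columnsPerThread);
        workers.emplace_back(renderColumns, begin, end, height);
    }
    for (auto &w : workers) w.join();              // wait for all columns
}
```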

Figure 13: Performance comparison based on (log2) number of threads, run on a 4-core hyperthreaded Intel CPU.

Cosine Hemisphere sampling

Instead of sampling hemisphere points uniformly at random during ray casting, I used a stratified sampling technique that segments the hemisphere into equal-area regions and samples within each one. This results in more evenly distributed sample directions and much less noise overall. (Sorry, I don't have a comparison image for this, but here is someone else's before and after.)
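For reference, a common way to draw cosine-weighted hemisphere directions is Malley's method (sample a disk uniformly, then project up); the sketch below shows that idea and is not necessarily the exact scheme used in Serene:

```cpp
#include <algorithm>
#include <cmath>

// Cosine-weighted hemisphere direction about the +z axis from two uniform
// random numbers in [0, 1); with stratification, (u1, u2) come from jittered
// grid cells so the directions spread evenly over the hemisphere.
Vec3 cosineSampleHemisphere(double u1, double u2) {
    double r = std::sqrt(u1);
    double phi = 2.0 * M_PI * u2;
    return Vec3(r * std::cos(phi), r * std::sin(phi),
                std::sqrt(std::max(0.0, 1.0 - u1)));
}
```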

Progress bar

I got pretty frustrated not knowing how far along my renders were, so I made a very simple progress bar that reports the proportion of pixels that have been sampled.
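In sketch form (pixelsDone is a hypothetical counter incremented by the render loop, not Serene's actual code):

```cpp
#include <atomic>
#include <cstdio>

std::atomic<long> pixelsDone{0};

// Print the fraction of pixels finished on a single, repeatedly rewritten line.
void reportProgress(long totalPixels) {
    double fraction = double(pixelsDone.load()) / double(totalPixels);
    std::printf("\rRendering: %5.1f%%", 100.0 * fraction);
    std::fflush(stdout);
}
```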

Figure 14: Progress bar.

Conclusion

Although it was a lot of work, I had a lot of fun working on the project and, most importantly, learned a lot. As someone who didn't know what path tracing even was just a month ago, I am quite happy with the results I have achieved. However, there were a lot of improvements on my backlog that I couldn't get to. Most of these were performance improvements I wish I could have implemented, e.g. segmentation based on samples instead of pixels, progressive rendering, kd-trees for mesh intersection, an image preview, etc.

Lastly, I am grateful for all the references I found on the internet, which are all included in my report. Links to both the project source code and the complete report coming soon, inshAllah.