Whitted ray tracing versus Rasterisation

The results demonstrated so far suggest that traditional Whitted ray tracing delivers relatively low lighting and image quality, and requires largely static scenes, compared with what we are already used to. Not exactly the quantum leap some would have you believe!

One thing ray tracing can do that rasterisation can't easily handle is a simple reflection and refraction model. Ray tracing solves surface refraction very naturally: a ray hits a surface, is bent according to Snell's law, and is possibly split into reflected and refracted secondary rays. Accumulating the results of those secondary rays gives you basic reflection and refraction.
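As a rough sketch (not from the original article; the names and types are mine), this is essentially all the Whitted model needs per hit: a reflection direction, a Snell's-law refraction direction, and a recursive accumulation of the secondary rays.

```cpp
#include <cmath>
#include <optional>

// Minimal vector type, just enough for this sketch.
struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror reflection of incident direction d about unit surface normal n.
Vec3 reflect(Vec3 d, Vec3 n) { return d - n * (2.0f * dot(d, n)); }

// Snell's-law refraction of unit direction d through unit normal n,
// with eta = n_incident / n_transmitted; returns nothing on total
// internal reflection.
std::optional<Vec3> refract(Vec3 d, Vec3 n, float eta) {
    float cosi = -dot(d, n);
    float k = 1.0f - eta * eta * (1.0f - cosi * cosi);
    if (k < 0.0f) return std::nullopt;   // total internal reflection
    return d * eta + n * (eta * cosi - std::sqrt(k));
}

// The Whitted recursion itself, in outline (scene and shading types omitted):
//   Colour trace(Ray r, int depth) {
//       Hit h = intersect(scene, r);
//       Colour c = directLighting(h);                    // shadow rays
//       if (depth < MAX_DEPTH) {
//           c += h.kReflect * trace(reflectedRay(h), depth + 1);
//           c += h.kRefract * trace(refractedRay(h), depth + 1);
//       }
//       return c;
//   }
```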

However, even ray tracing can't easily simulate subsurface scattering or indirect refraction (caustics). The aliasing problem also rears its ugly head again here: most real-world surfaces have fuzzy, glossy reflection and refraction, which takes a massive number of rays per bounce to simulate. Rasterisation-based simulation of any of these effects usually comes with severe limitations and works mainly for fairly flat surfaces and blurry reflection and refraction.

Almost every solution to a ray tracing problem involves shooting more rays, often exponentially more, which really isn't great for any real-time context. As the quality of a ray-traced render goes up, you end up firing massive numbers of secondary rays into the scene. And even if you assume current and future research will solve the dynamic scene problem and allow enough rays to be cast that real-time distributed ray tracing becomes possible, you're still left with the indirect lighting problem.
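To put some back-of-the-envelope numbers on that (purely illustrative figures, not measurements from the article): with reflection and refraction each hit can spawn two secondary rays, so the worst-case ray count per primary ray grows geometrically with recursion depth, and the fuzzy surfaces mentioned above only push the branching factor higher.

```cpp
#include <cstdio>

// Worst-case ray count for a Whitted-style render: every hit spawns
// `branching` secondary rays (2 for plain reflection + refraction,
// more once glossy/fuzzy surfaces need several samples per bounce).
long long totalRays(long long pixels, int samplesPerPixel,
                    int branching, int depth) {
    long long perPrimary = 1;   // the primary ray itself
    long long level = 1;
    for (int i = 0; i < depth; ++i) {
        level *= branching;
        perPrimary += level;    // rays added at this bounce
    }
    return pixels * samplesPerPixel * perPrimary;
}

int main() {
    // Illustrative numbers: 1920x1080, 4 samples per pixel,
    // reflection + refraction (branching factor 2), 4 bounces.
    std::printf("%lld rays per frame\n",
                totalRays(1920LL * 1080LL, 4, 2, 4));
    // Roughly 2.6e8 rays for a single frame, before counting any shadow rays.
}
```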

Whether ray tracing will replace rasterisation depends on whether ray tracing delivers a serious improvement in image quality. However, the logic that insists ray tracing looks better than rasterisation dates back to the 1970s, when rasterisation meant a few Gouraud- or Phong-shaded triangles while ray tracing was adding shadows, reflection and refraction. Things have changed a lot since then.

GPUs today access textures at very high speeds, and shader hardware allows textures to be used as global scene data. This approach to global illumination started with shadow maps: a light-to-surface occlusion term packed into a texture calculated in real time. In other words, a shadow map is a texture representing the results of shadow secondary rays. This approach allows most global illumination models to be used with a rasterisation-only renderer; it's simply a matter of figuring out how to capture the data in a texture.
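As a deliberately simplified illustration of "shadow rays packed into a texture", here is roughly what the per-pixel shadow map test amounts to, assuming the surface position has already been projected into the light's clip space and the depth map is just an array of floats (a CPU-side sketch, not any particular API):

```cpp
#include <vector>
#include <algorithm>

// A light-space depth buffer rendered earlier in the frame: one float per
// texel, holding the depth of the closest surface as seen from the light.
struct ShadowMap {
    int width = 0, height = 0;
    std::vector<float> depth;   // width * height entries, in [0, 1]
};

// Returns 1 if the point is lit and 0 if it is in shadow. (u, v) and
// depthFromLight are the point's position in the light's normalised
// clip space; bias fights self-shadowing ("shadow acne").
float shadowTerm(const ShadowMap& map, float u, float v,
                 float depthFromLight, float bias = 0.002f) {
    int x = std::clamp(static_cast<int>(u * map.width),  0, map.width  - 1);
    int y = std::clamp(static_cast<int>(v * map.height), 0, map.height - 1);
    float occluderDepth = map.depth[y * map.width + x];
    // The stored depth is the answer a shadow ray towards the light would
    // have given: anything further from the light than it is occluded.
    return (depthFromLight - bias > occluderDepth) ? 0.0f : 1.0f;
}
```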

A real-time ray tracer would at best produce image quality comparable to current rasterisation-based renderers unless the scene is highly reflective and refractive. Luckily for rasterisation-based renderers, real-world scenes made of shiny transparent balls are rare. For a scene filled with shiny refractive balls, a ray tracer would of course beat a rasterisation-based renderer. At the same time, though, a completely dynamic scene with lots of overlapping large triangles would kill a real-time ray tracer.

The big problem for quality is that neither ray tracing nor rasterisation has a good solution to indirect lighting. The usual current real-time approach is ambient occlusion: a simplification that produces a single term representing how much a given position in space is affected by global ambient lighting. Currently, there are no real-time approaches to indirect lighting that don't involve massive simplification, usually assuming a static scene so that the radiance transfer function can be precomputed.
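For reference, ambient occlusion boils down to asking, for each point, what fraction of the hemisphere above it is unblocked. A minimal sketch might look like the following, where the sample count and the `occluded` visibility query are stand-ins for whatever the renderer actually provides (ray casts offline, depth-buffer tests in screen-space variants):

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <random>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ambient occlusion term in [0, 1]: the fraction of directions in the
// hemisphere around normal n that reach the "sky" without hitting geometry.
// occluded(p, dir) is whatever visibility query the renderer provides.
float ambientOcclusion(Vec3 p, Vec3 n, int numSamples,
                       const std::function<bool(Vec3, Vec3)>& occluded) {
    std::mt19937 rng(1234);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    int unblocked = 0;
    for (int i = 0; i < numSamples; ++i) {
        // Uniform direction on the unit sphere...
        float z = 1.0f - 2.0f * uni(rng);
        float r = std::sqrt(std::max(0.0f, 1.0f - z * z));
        float phi = 6.2831853f * uni(rng);
        Vec3 dir{r * std::cos(phi), r * std::sin(phi), z};
        // ...flipped into the hemisphere above the surface.
        if (dot(dir, n) < 0.0f) dir = {-dir.x, -dir.y, -dir.z};
        if (!occluded(p, dir)) ++unblocked;
    }
    return static_cast<float>(unblocked) / numSamples;
}
```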