Conclusion

Ray tracing isn't the holy grail of real-time rendering: like rasterisation, it has serious problems of its own to conquer before it can offer acceptable real-time image quality. Despite what a lot of the current press seems to suggest, though, the two techniques aren't mutually exclusive in the first place.

A hybrid approach will probably offer the best of both worlds. Many ray tracers already replace primary rays with rasterisation and only compute secondary rays through traditional ray tracing. The ability to spawn rays from the GPU's shader core would solve many hard problems that rasterisation engines currently face. For a large portion of the common realistic scenes we want to render, there seems to be no major image-quality advantage in moving to a ray tracer anyway.
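To make the hybrid idea concrete, here is a minimal CPU sketch (not from the article, and all names and the toy scene are illustrative): a rasterisation pass is assumed to have already filled a G-buffer with per-pixel position, normal and view direction, and a secondary reflection ray is then spawned and traced against the rest of the scene, here just a single mirror sphere.

```python
import math

def reflect(d, n):
    """Reflect direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

def hit_sphere(origin, direction, centre, radius):
    """Nearest positive hit distance along a normalised ray, or None."""
    oc = tuple(o - c for o, c in zip(origin, centre))
    b = 2.0 * sum(x * y for x, y in zip(oc, direction))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def shade_pixel(gbuffer_entry):
    """Secondary-ray pass for one pixel of a rasterised G-buffer.

    gbuffer_entry is (position, normal, view_dir), the data a primary
    rasterisation pass would normally supply.
    """
    position, normal, view_dir = gbuffer_entry
    ray_dir = reflect(view_dir, normal)
    # Hypothetical scene: one mirror sphere floating above the surface.
    if hit_sphere(position, ray_dir, (0.0, 2.0, 0.0), 1.0) is not None:
        return "sphere"  # reflection ray hit the sphere
    return "sky"         # reflection ray escaped the scene
```

A pixel looking straight down at the floor beneath the sphere reflects straight up and sees the sphere; a grazing pixel reflects sideways and sees only sky. The point is that only the secondary bounce needs ray tracing; primary visibility stays on the rasteriser.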

In my opinion, research in real-time ray tracing is chasing the wrong thing. While it's nice to have a system capable of handling simple refraction and reflection, that doesn't provide a solution to the bigger problem of solving the Kajiya rendering equation. Ray tracing is likely to occupy a position in real-time rendering similar to the one it holds in the off-line world: a useful technique for a few effects, but not the basis of the main renderer.
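For reference, the equation in question (Kajiya, 1986) is usually written in the following standard form (not quoted from the article): the radiance leaving a point is what the surface emits plus the incoming radiance from every direction over the hemisphere, weighted by the surface's BRDF and the cosine term.

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\,
      L_i(x, \omega_i)\,(\omega_i \cdot n)\,\mathrm{d}\omega_i
```

Simple reflection and refraction sample only one or two directions out of that integral; the indirect lighting problem discussed below is evaluating the whole thing.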

Instead, solving the indirect lighting problem is the real holy grail of real-time rendering. Of course, at this point no one even has a serious framework for doing so that doesn't require massive amounts of precomputed data, so there's still a lot of work to be done regardless of approach.

Editor's Note

Confused about some of the terms in Deano's discourse? He's handily compiled a glossary you can reference, and if there's anything you want to discuss that's not covered in the glossary, or you just want to talk about the article in general, we've got a forum thread where you can do both.