Browse the world of real-time computer graphics, and you'll find the oft-quoted meme: "One day we will have real-time ray tracing, and graphics will be so much better!" Many people hold up ray tracing as the solution to realistic lighting, and as such they view the rasterisation renderers used in today's GPUs and consoles as inherently inferior.

Indeed, according to Wikipedia, rasterisation "is not based on physical light transport and is therefore incapable of correctly simulating many complex real-life lighting situations." Ray tracing, on the other hand, "facilitates more advanced optical effects, such as accurate simulations of reflection and refraction, and is still efficient enough to frequently be of practical use when such high quality output is sought."

So roll on, real-time ray tracing! Or are there some caveats that the proponents of ray tracing conveniently forget to mention?

What is ray tracing?

The type of ray tracing being promoted for real-time use is heavily based on a technique properly known as Whitted ray tracing. The Whitted technique builds an image by tracing light in the opposite direction to reality: rays travel from the eye out into the scene, rather than from light sources into the eye (oddly enough, this resembles a medieval theory of how your eyes work). For every pixel, the renderer fires an infinitesimally thin ray that travels in a straight line until it encounters the closest thing along that line; at that point it is deflected, absorbed or split. Lighting is done by tracing further rays from the intersection point to each light source. The sum of these rays records how much light arrived at that point on the surface, which inherently accounts for effects such as shadows, transparency, refraction and reflection.
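The per-pixel process described above can be sketched in a few lines of code. The following is a minimal illustration, not any particular renderer's implementation: a hypothetical scene with a single sphere and a single point light, where each primary ray finds the nearest intersection and a shadow ray then decides how much light reaches that point.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def intersect_sphere(origin, direction, center, radius):
    """Distance t along the ray to the nearest hit, or None if it misses.
    Assumes direction is unit length, so the quadratic's 'a' term is 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray never touches the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def trace(origin, direction, sphere_center, sphere_radius, light_pos):
    """One primary ray -> a grey-scale intensity for that pixel."""
    t = intersect_sphere(origin, direction, sphere_center, sphere_radius)
    if t is None:
        return 0.0  # ray escaped the scene: background colour
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = normalize([h - c for h, c in zip(hit, sphere_center)])
    to_light = normalize([l - h for l, h in zip(light_pos, hit)])
    # Shadow ray: offset slightly along the normal to avoid self-intersection.
    shadow_origin = [h + 1e-4 * n for h, n in zip(hit, normal)]
    if intersect_sphere(shadow_origin, to_light, sphere_center, sphere_radius):
        return 0.05  # light is blocked: ambient term only
    # Lambertian shading: light arriving scales with the cosine of incidence.
    diffuse = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    return 0.05 + 0.95 * diffuse
```

A full Whitted tracer would also spawn reflection and refraction rays recursively at each hit (the "deflected or split" part), but the shadow ray alone already shows why shadows fall out of the algorithm for free.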