Overview of deferred lighting

There are 3 distinct phases in deferred lighting.

  1. Geometry phase
  2. Lighting phase
  3. Post-processing phase

Each phase uses DirectX shaders (both vertex and pixel shaders), but the purpose and inputs of each are distinctly different: the output of each phase becomes the input of the next, with the last phase (Post-processing) having access to the output of both the geometry and lighting phases. Most of the rest of this article concentrates on the 2nd and 3rd phases, once the G-Buffers have been filled and are ready for use, and on the image-space techniques that use these buffers to produce a photo-realistic image.

Note: From now on I will use the term ‘shader’ in the Renderman sense of the complete process of rendering an object [7]. Often a ‘shader’ will consist of both a vertex shader and a pixel shader (and sometimes may involve multiples of both).

Geometry phase

The Geometry phase is the only phase that actually uses an object’s mesh data; its output is the G-Buffer and its inputs are whatever each object requires. At its simplest, this could just be a 2D operation (copying G-Buffer data), which would allow relighting to occur without any mesh rendering.

Each geometry shader is responsible for filling the G-Buffers with correct parameters. This is roughly equivalent to a Renderman surface shader, the main difference being that, whereas Renderman surface shaders compute lighting values, our system just outputs parameters to the lighting phase.
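
As a concrete illustration, a geometry-phase pixel shader under DirectX might write its parameters to multiple render targets. The sketch below is only one possible arrangement: the layout (view-space normal and depth in one target, surface colour and specular power in the others), the sampler and the packing are all hypothetical, and a real renderer will choose its own.

    // One possible G-Buffer layout (hypothetical); the exact packing is renderer-specific.
    struct GBufferOutput
    {
        float4 NormalDepth   : COLOR0;  // xyz = view-space normal, w = view-space depth
        float4 DiffuseColour : COLOR1;  // rgb = surface colour
        float4 Material      : COLOR2;  // x = specular power (packed), y = specular intensity
    };

    sampler DiffuseMap : register(s0);

    GBufferOutput GeometryPS( float3 viewNormal : TEXCOORD0,
                              float  viewDepth  : TEXCOORD1,
                              float2 uv         : TEXCOORD2 )
    {
        GBufferOutput o;
        o.NormalDepth   = float4( normalize( viewNormal ), viewDepth );
        o.DiffuseColour = float4( tex2D( DiffuseMap, uv ).rgb, 0.0f );
        o.Material      = float4( 32.0f / 255.0f, 1.0f, 0, 0 ); // specular power scaled into [0,1]
        return o;
    }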

Usually the depth buffer is used to determine the closest surface at each pixel. If the geometry shaders are quite expensive, it may be worth doing a depth set-up phase [8], in which all geometry is first rendered to set up the depth buffer and then re-rendered with the actual shaders.
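
The depth set-up pass itself can be very cheap, since it only has to transform positions. A minimal sketch, assuming a single combined world-view-projection matrix:

    float4x4 WorldViewProj;

    // Depth set-up pass: transform position only; the colour output is irrelevant
    // (colour writes would normally be disabled via render state).
    float4 DepthOnlyVS( float4 pos : POSITION ) : POSITION
    {
        return mul( pos, WorldViewProj );
    }

    float4 DepthOnlyPS() : COLOR0
    {
        return float4( 0, 0, 0, 0 );
    }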

The major advantage over the conventional real-time approach to Renderman-style procedural textures is that the entire shader is devoted to generating output parameters and that it is run only once, regardless of the number or types of lights affecting the surface (generating depth maps also requires the geometry shaders to be run, but usually with much simpler functions).

Another advantage is that, after this phase, how the G-Buffer was filled is irrelevant; this allows impostors and particles to be mixed in with normal surfaces and treated in the same manner (lighting, fog, etc.).

Some portions of the lighting equation that stay constant can be computed here and stored in the G-Buffer if necessary. This is useful if your lighting model uses a Fresnel term, which is usually based only on the surface normal and the view direction.
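
For example, a Schlick-style Fresnel approximation depends only on the normal and view direction, so it could be evaluated once in the geometry phase and written into a spare G-Buffer channel rather than recomputed per light. The function below is just an illustrative sketch; F0 is the reflectance at normal incidence.

    // Schlick's approximation: F = F0 + (1 - F0) * (1 - N.V)^5
    // N and V are both known in the geometry phase, so the result can be
    // stored in a spare G-Buffer channel instead of being recomputed per light.
    float FresnelTerm( float3 N, float3 V, float F0 )
    {
        float NdotV = saturate( dot( N, V ) );
        return F0 + (1.0f - F0) * pow( 1.0f - NdotV, 5.0f );
    }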

Lighting phase

The real power of deferred lighting is that lights are first-class citizens; this complete separation of lighting and geometry allows lights to be treated in a totally different way from standard rendering. This makes the artist’s job easier, as there are fewer restrictions on how lights affect surfaces, which allows for easily customizable lighting rigs.

Light shaders have access to the parameters stored in the G-Buffer at each pixel they light. These parameters will be customized to each renderer, but to be really useful they must include some standard parameters vital to photo-realistic lighting: position, normal, lighting model parameters and surface colours. In many cases these parameters will be packed, but we ignore that for the discussion of light shaders and just assume the parameters are already in a form ready for use.
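
As an illustration, a point-light shader runs in image space, reads the G-Buffer at each pixel it covers and evaluates the lighting equation from those parameters. The sketch below is hypothetical: it assumes a view-space normal and depth in one buffer, the surface colour in another, position reconstructed from depth along a per-pixel view ray, and a simple linear attenuation.

    sampler GBufNormalDepth : register(s0);  // hypothetical: view-space normal + depth
    sampler GBufDiffuse     : register(s1);  // hypothetical: surface colour

    float3 LightPosition;   // light position in view space
    float3 LightColour;
    float  LightRadius;

    float4 PointLightPS( float2 uv : TEXCOORD0, float3 viewRay : TEXCOORD1 ) : COLOR0
    {
        float4 nd = tex2D( GBufNormalDepth, uv );
        float3 N  = nd.xyz;
        float3 P  = viewRay * nd.w;              // reconstruct view-space position from depth
        float3 Cs = tex2D( GBufDiffuse, uv ).rgb;

        float3 L    = LightPosition - P;
        float  dist = length( L );
        L /= dist;

        float atten   = saturate( 1.0f - dist / LightRadius );
        float diffuse = saturate( dot( N, L ) );

        return float4( Cs * LightColour * diffuse * atten, 1.0f );
    }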

I’ll try to borrow standard variable names from Renderman and the Phong / Blinn [9][10] lighting equations. Some of these are stored in the G-Buffer, some are properties of the light, and some are calculated in the shader; not all variables will be used by every shader.
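
To make those names concrete, a Blinn-style specular term combined with a Lambert diffuse term could be written as below, where N, L and V come from the G-Buffer and the light, H is calculated in the shader, and Cs, Cl and the specular exponent are surface and light parameters. The function itself is only an illustrative sketch, not part of any standard.

    // Blinn-Phong evaluation using Renderman-style names:
    //   N  = surface normal         L  = direction to the light
    //   V  = direction to the eye   H  = half vector between L and V
    //   Cs = surface colour         Cl = light colour
    float3 BlinnPhong( float3 N, float3 L, float3 V, float3 Cs, float3 Cl, float specPower )
    {
        float3 H        = normalize( L + V );
        float  diffuse  = saturate( dot( N, L ) );
        float  specular = pow( saturate( dot( N, H ) ), specPower );
        return Cl * ( Cs * diffuse + specular );
    }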