Introduction

Deferred Rendering attempts to combine conventional rendering techniques with the advantages of image-space techniques. In this article we separate lighting from rendering and, in doing so, make lighting a completely image-space technique. This has some disadvantages but also some key advantages, which include:

  • A light's major cost is based on the screen area it covers
  • All lighting is per-pixel and all surfaces are lit equally
  • Lights can be occluded like other objects, which allows fast hardware Z-Reject
  • Shadow mapping is fairly cheap

The main disadvantages are:

  • Large frame-buffer size
  • Potentially high fill-rate
  • Supporting multiple light equations is difficult
  • High hardware specifications
  • Transparency is very hard

History

The concept of deferred lighting seems to first appear in Takahashi and Saito’s "Comprehensible rendering of 3-D shapes"[1], with the earlier work in Perlin’s "The Image Synthesizer"[2] providing important contributions to the idea of image-space post-processing for shading evaluation. Takahashi and Saito’s paper is actually about NPR techniques (which Mitchell has adapted for real-time use [3]), but they mention that the approach could be adapted to photo-realistic rendering, which is what this article presents. The major contribution Takahashi and Saito introduced was the Geometry Buffer (G-Buffer); G-Buffers are the fundamental primitive that allows deferred lighting to work.

Deferred Rendering or Deferred Shading or Deferred Lighting?

The term deferred rendering is used to describe a number of related techniques; they all share a deferment stage but differ in what portion of the pipeline is deferred. This article only defers the lighting portion of the pipeline; all other parts can be done in whatever way you like, and the only requirement is that the G-Buffers are filled prior to the lighting stage. Deferred Shading is typically where the actual surface shader execution is deferred; this is the model presented by the UNC Pixel Plane project [4].

Important Concepts

Textures as arrays

Textures can be used as 1D to 3D arrays; we use this feature a lot, so it's worth examining the major differences between textures and arrays first. Textures are not conventional arrays: we can use their extra properties to our advantage, but there are also consequences that we must work around.
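As a minimal sketch, assuming an HLSL pixel shader and a point-sampled 1D texture (the names ARRAY_SIZE, LookupMap and arrayFetch are purely illustrative, not part of this article's code), an array lookup simply becomes a texture fetch:

    #define ARRAY_SIZE 256              // illustrative array length

    sampler1D LookupMap;                // point-sampled 1D texture holding the array data

    float4 arrayFetch( float index )
    {
        // convert an integer-style index into a normalised texture coordinate,
        // sampling at the texel centre so no filtering occurs between neighbours
        float u = ( index + 0.5 ) / ARRAY_SIZE;
        return tex1D( LookupMap, u );
    }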

Out of bounds

Textures are very good at handling out-of-bounds array accesses; the wrap-mode texture states allow us to control how out-of-bounds indices are handled on each axis separately. The usual mode is clamp, which repeats the last texel ad infinitum; alternatively, we can use a wrap setting to get a modulus operation on the indices for free.
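As a minimal sketch, assuming Direct3D 9 effect-file syntax (FunctionMap and FunctionSampler are illustrative names), the per-axis out-of-bounds behaviour is chosen through sampler states:

    texture FunctionMap;

    sampler2D FunctionSampler = sampler_state
    {
        Texture   = <FunctionMap>;
        AddressU  = Clamp;              // out-of-bounds U repeats the last texel
        AddressV  = Wrap;               // out-of-bounds V gets a free modulus
        MinFilter = Point;
        MagFilter = Point;
    };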

Linear Interpolation

Bilinear filtering gives us free linear interpolation between samples, which is very useful when we are storing a discrete approximation to a function. The main problem is that we cannot control this on a per-axis basis. A common pattern is to store the discrete approximation to a 1D function along one axis of the texture and store different functions along the other axis. In this case we want only linear, not bilinear, interpolation; there are two approaches to achieving linear interpolation from a texture.

  1. The manual method: use point sampling, sample the texture twice and do the lerp in the shader.
  2. Pad the texture: repeat each sample so that the 'free' linear interpolation on the axis we don't want produces the same value as if we were only point sampling on that axis.

The second approach costs texture space due to the extra copy of each sample, but the pixel shader only samples the texture once to retrieve the linearly interpolated result. This saves both texture lookups and arithmetic instructions, so I usually use this approach. For comparison, a sketch of the manual method is shown below.
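Here is a minimal sketch of the manual method (approach 1), assuming an HLSL pixel shader, a point-sampled texture with clamp addressing along U, one function per row, and illustrative names (FUNC_TABLE_WIDTH, FunctionMap, sampleFunction):

    #define FUNC_TABLE_WIDTH 256        // illustrative number of samples per function

    sampler2D FunctionMap;              // one function per row, samples along U

    float4 sampleFunction( float x, float rowV )
    {
        // x is in [0,1); scale into texel space and split into base index + fraction
        float texelPos  = x * ( FUNC_TABLE_WIDTH - 1 );
        float baseIndex = floor( texelPos );
        float t         = texelPos - baseIndex;

        // fetch the two neighbouring samples at their texel centres
        // (clamp addressing keeps the second fetch safe at the last texel)
        float u0 = ( baseIndex + 0.5 ) / FUNC_TABLE_WIDTH;
        float u1 = ( baseIndex + 1.5 ) / FUNC_TABLE_WIDTH;
        float4 s0 = tex2D( FunctionMap, float2( u0, rowV ) );
        float4 s1 = tex2D( FunctionMap, float2( u1, rowV ) );

        // do the linear interpolation manually in the shader
        return lerp( s0, s1, t );
    }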

Projection

When sampling a texture, we can optionally specify that each coordinate be divided by the W component. We can use this to get a free divide whenever we want to look something up, which saves pixel shader instructions, especially when converting a position in projective (post-perspective) space into a texture lookup.
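As a minimal sketch, assuming an HLSL pixel shader and the tex2Dproj intrinsic (LightMap, projectedFetch and the hard-coded bias are illustrative; in practice the remap from clip space into [0,1] texture space is often folded into the vertex shader instead):

    sampler2D LightMap;                 // illustrative texture addressed in screen space

    float4 projectedFetch( float4 clipPos )
    {
        // remap x and y from [-w, w] into [0, w] so the divide by w lands in [0,1]
        float4 texPos;
        texPos.x = 0.5 * ( clipPos.x + clipPos.w );
        texPos.y = 0.5 * ( clipPos.w - clipPos.y );   // flip Y for texture space
        texPos.z = clipPos.z;
        texPos.w = clipPos.w;

        // tex2Dproj divides the coordinates by w before the lookup,
        // saving an explicit divide in the shader
        return tex2Dproj( LightMap, texPos );
    }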