Recently, at the Game Developers Conference and at NVIDIA's launch of the GeForce 6/NV40, one of the highlights of both events was the demonstration of Epic Games' next-generation engine, UnrealEngine3 (UE3 for short). Although the engine is still about two years from completion, its public showing in its current state left almost everyone in awe of the technologies showcased, as explained in snippets by Epic's co-founder Tim Sweeney.

We took the opportunity afforded by Tim, who had recently told this interviewer that further discussion of the engine would be possible after both events, to ask him a little more about UnrealEngine3.


You have said that UE3 will target DX9-based video cards as the absolute lowest common denominator, even to the point of saying that DX9 cards such as the GeForce FXs and Radeon 9x00s will be brought to their knees in terms of performance. Using Valve's yet-to-be-available Half-Life 2 as an example, will UE3 be at all scalable down to lesser-than-DX9-level video cards (by making available, perhaps, DX7 and/or DX8 paths), or did you really mean what you said, i.e. that nothing but a DX9 video card will be able to run UE3-based games? If the latter is the case, are you saying you are not worried at all about gamers who, by the time UE3 games are made available, still own pre-DX9 video cards (like the GeForce3 or Radeon 8500)?

DirectX9 is our minimum hardware target for Unreal Engine 3. A great deal of generalization, improvement, and even simplification has been made possible by eliminating legacy code paths and formulating all rendering around fully-general pixel shader programs.

QUOTE: On the next-generation UnrealEngine3

"A major design goal of Unreal Engine 3 is that designers should never, ever have to think about 'fallback' shaders..."

Tim Sweeney, Epic Games

If we were going to ship a PC game on UE3 this year, we certainly would have aimed lower, since those GeForce4 MXs are very prevalent in very low-end PCs today. But we're looking further ahead, and as a result Unreal Engine 3 has become a great engine for next-generation console games and PC games aimed at the mass market in early 2006.

NVIDIA's latest GeForce 6 series of video cards supports Shader Model 3.0, which is exposed by the hardware but not usable under the publicly available version of DirectX 9.0b. With a revision of DX9 that allows the use of SM 3.0 anticipated, could you tell us what the more interesting and usable advantages are that SM 3.0 offers over the current SM 2.0 model? Referencing the two public demos of UE3, what kinds of pixel and vertex shader 3.0 features were used?

PS 3.0 utilizes a wide range of optimizations, from 64-bit frame-buffer blending to looping and dynamic conditionals for rendering multiple light interactions in a single pass without requiring a combinatorial explosion of precompiled shaders.
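
To make that contrast concrete, here is a minimal C++ sketch of our own (not UE3 code; every name in it is hypothetical, and plain C++ stands in for shader code): the SM 2.0 style selects one precompiled shader variant per combination of lighting features, while the SM 3.0 style uses a single shader whose loop count is decided at run time.

    // Illustrative sketch only (not UE3 code): contrasts the SM 2.0 approach of
    // picking one precompiled shader per feature combination with the SM 3.0
    // approach of a single shader that loops over the active lights at run time.
    #include <cstdio>
    #include <vector>

    struct Light { float r, g, b, intensity; };

    // SM 2.0 style: one precompiled variant per (lightCount, shadow, specular) combo.
    // The variant count explodes combinatorially as features are added:
    // 8 light counts x 2 shadow states x 2 specular states = 32 variants.
    int ChoosePrecompiledVariant(int lightCount, bool shadowed, bool specular) {
        return lightCount * 4 + (shadowed ? 2 : 0) + (specular ? 1 : 0);
    }

    // SM 3.0 style: one shader with a dynamic loop; the light count is a uniform.
    // Written here as plain C++ standing in for the pixel shader's inner loop.
    float ShadeDynamically(const std::vector<Light>& lights) {
        float result = 0.0f;
        for (const Light& l : lights)          // dynamic loop, no recompilation
            result += l.intensity * (l.r + l.g + l.b) / 3.0f;
        return result;
    }

    int main() {
        std::vector<Light> lights = {{1, 1, 1, 0.8f}, {1, 0.5f, 0.2f, 0.4f}};
        printf("SM2-style variant index: %d\n",
               ChoosePrecompiledVariant((int)lights.size(), true, false));
        printf("SM3-style single-pass result: %f\n", ShadeDynamically(lights));
    }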

Our pixel shaders in the Unreal Engine 3 demos are typically 50-200 instructions in length, and are composed from a wide range of artist-controlled components and procedural algorithms.

Our vertex shaders are quite simple nowadays, and just perform skeletal blending and linear interpolant setup on behalf of the pixel shaders. All of the heavy lifting is now on the pixel shader side -- all lighting is per-pixel, all shadowing is per-pixel, and all material effects are per-pixel.

Once you have the hardware power to do everything per-pixel, it becomes undesirable to implement rendering or lighting effects at the vertex level; such effects are tessellation-dependent and difficult to integrate seamlessly with pixel effects.
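
As an aside, a small numeric sketch of ours (not Epic's) shows why vertex-level lighting is tessellation-dependent in the way Sweeney describes: lighting computed at two vertices and then interpolated gives a different answer at the midpoint of an edge than lighting computed from the interpolated normal, and the vertex-level answer shifts as soon as the edge is subdivided.

    // Illustrative numeric sketch (not engine code): why lighting at vertices is
    // tessellation-dependent while per-pixel lighting is not. We light the
    // midpoint of an edge whose endpoint normals differ by 90 degrees.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static Vec3 Lerp(Vec3 a, Vec3 b, float t) {
        return {a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t};
    }
    static Vec3 Normalize(Vec3 v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return {v.x / len, v.y / len, v.z / len};
    }
    static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    int main() {
        Vec3 n0 = {0, 0, 1};          // normal at vertex 0
        Vec3 n1 = {1, 0, 0};          // normal at vertex 1
        Vec3 lightDir = {0, 0, 1};    // light pointing along +Z

        // Per-vertex: light each vertex, then interpolate the lit results.
        float litMidVertex = 0.5f * Dot(n0, lightDir) + 0.5f * Dot(n1, lightDir);

        // Per-pixel: interpolate the normal, renormalize, then light it.
        float litMidPixel = Dot(Normalize(Lerp(n0, n1, 0.5f)), lightDir);

        // The per-vertex result changes if the edge is subdivided with a proper
        // normal at the new vertex; the per-pixel result does not.
        printf("per-vertex midpoint: %.3f\n", litMidVertex);  // 0.500
        printf("per-pixel  midpoint: %.3f\n", litMidPixel);   // 0.707
    }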

More on shader models. With the GeForce 6 (and presumably other future SM 3.0 parts from other Independent Hardware Vendors) being made available, do you prefer to target Pixel Shader 3.0 and write Pixel Shader 2.0 "fallbacks", or do you prefer to target 2.0 and write extra code for 3.0 usage? What about Vertex Shader 3.0? Will it be used for effects that can't be done on Vertex Shader 2.0 hardware? What would examples of those effects be?

A major design goal of Unreal Engine 3 is that designers should never, ever have to think about "fallback" shaders, as Unreal Engine 2 and past mixed-generation DirectX 6/7/8/9 engines relied on. We support everything everywhere, and use new hardware features like PS 3.0 to implement optimizations: reducing the number of rendering passes needed to implement an effect, reducing the number of SetRenderTarget operations by performing blending in place, and so on. Artists create an effect, and it's up to the engine and runtime to figure out how to most efficiently render it faithfully on a given hardware architecture.
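
A hypothetical C++ sketch of that kind of decision (our illustration; the caps structure and thresholds are invented and are not UE3's): the runtime inspects what the hardware can do and collapses an effect into fewer passes when long shaders with dynamic branching are available.

    // Hypothetical sketch (not UE3's actual architecture): how a renderer might
    // collapse passes on PS 3.0-class hardware with dynamic branching, versus
    // falling back to one pass per light on older hardware.
    #include <cstdio>

    struct HardwareCaps {
        bool supportsDynamicBranching;   // e.g. PS 3.0-class hardware
        int  maxPixelShaderInstructions;
    };

    // Decide how many geometry passes an effect with N lights will take.
    int PassesForEffect(const HardwareCaps& caps, int lightCount) {
        if (caps.supportsDynamicBranching && caps.maxPixelShaderInstructions >= 512)
            return 1;                    // loop over all lights in one pass
        return lightCount;               // one additive pass per light
    }

    int main() {
        HardwareCaps sm3 = {true, 512};
        HardwareCaps sm2 = {false, 96};
        printf("SM3-class card: %d pass(es) for 4 lights\n", PassesForEffect(sm3, 4));
        printf("SM2-class card: %d pass(es) for 4 lights\n", PassesForEffect(sm2, 4));
    }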

There are currently two floating-point implementations, 32-bit and 24-bit. You have said that UE3 requires 32-bit for accuracy. Can you provide us with examples (specific to UE3, not in general) where 24-bit floating-point hardware will either not run UE3-based games at all or will present artifacts? Essentially, what are the specific precision issues with 24-bit FP vis-à-vis 32-bit FP in UE3?

Unreal Engine 3 works best with 32-bit floating point, but supports 24-bit floating point. Currently there are a few cases where minor artifacts are visible with 24-bit floating point. As shaders grow more complex, these will likely be amplified. But even with 24-bit precision, the scene quality looks far higher than, for example, Unreal Engine 2 scenes rendered with 8-bit-per-component precision.
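
For readers wondering what the precision gap looks like in practice, here is a rough C++ illustration of ours (not Epic's, and only an approximation of real 24-bit shader hardware): truncating a 32-bit float to roughly 16 mantissa bits, in the spirit of the s16e7 layout commonly associated with 24-bit pixel shader pipelines, shows how much coarser the representable steps become for values such as large world-space texture coordinates.

    // Rough illustration (not an exact model of 24-bit shader hardware): we
    // approximate 24-bit floating point by keeping only 16 of the 23 IEEE 754
    // mantissa bits, then compare precision against full 32-bit on a value
    // range typical of world-space texture coordinates.
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Drop the low 7 of the 23 mantissa bits, leaving ~16 bits of mantissa.
    float To24BitApprox(float value) {
        uint32_t bits;
        std::memcpy(&bits, &value, sizeof(bits));
        bits &= 0xFFFFFF80u;   // clear the 7 least significant mantissa bits
        std::memcpy(&value, &bits, sizeof(value));
        return value;
    }

    int main() {
        // A texture coordinate far from the origin, as on a large surface.
        float coord = 4096.03125f;
        float coord24 = To24BitApprox(coord);

        printf("full 32-bit:       %.5f\n", coord);    // 4096.03125
        printf("~24-bit truncated: %.5f\n", coord24);  // 4096.00000
        // Near 4096, the spacing between representable values is ~0.0005 with a
        // 23-bit mantissa but ~0.0625 with a 16-bit mantissa; gaps of that size
        // are where the minor artifacts Sweeney mentions can appear.
    }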