On the 3D Independent Hardware Vendor (IHV) front, the past year has seen a very competitive battle between NVIDIA and ATI, not only in terms of outright performance but also in DirectX9 API features, with each IHV attempting to one-up the other in the latter category. As an Independent Software Vendor (ISV), do these differences (in performance and in feature sets across IHV parts) affect the way games are developed?

We've been very happy with both NVIDIA's and ATI's efforts in the past few years. Though they compete aggressively on features, the architectures and common-denominator feature sets are sufficient that we're not held back. The only thing more we could wish for is for the badly underpowered integrated graphics chips from Intel and others to go away or improve enough that they aren't such unfortunate handicaps for game developers.

There have been criticisms that some DirectX9 games don't look much better than a game like Unreal Tournament 2003. Tomb Raider: The Angel of Darkness is an example, despite featuring a host of DirectX9 features such as floating-point textures and various 2.0 Pixel Shaders for certain effects. In your opinion, how does the general gaming public decide what is "great looking"? Would it be far wrong to say that it is still mostly a matter of detailed, colorful textures rather than the "realism" DirectX9 proposes to provide? Are some DirectX9 features simply too subtle to notice on screen?

The first DX9 games will mostly use it to add some visual improvements on top of a rendering pipeline designed over a multi-year development cycle and targeted primarily at DirectX7 hardware. That approach doesn't give you anywhere near the full benefit of DirectX9, but it's the commercially reasonable way to go if you're shipping a product in 2004, because there are at least 10X more DirectX7-class cards in gamers' hands than DirectX9 cards.

The really interesting things in shader land will start to happen in the late 2005 to early 2006 timeframe. That's when there will be a good business case for shipping DirectX9-focused games.

Finally, where do you think 3D hardware and CPU technology should be headed? Do you think we are likely to see 3D hardware taking over some of the functions of the CPU, going beyond rendering?

I think CPUs and GPUs are actually going to converge 10 years or so down the road. On the GPU side, you're seeing a slow march towards computational completeness. Once they achieve that, you'll see certain CPU algorithms that are amenable to highly parallel operations on largely constant datasets move to the GPU. On the other hand, the trend in CPUs is towards SMT/Hyper-Threading and multi-core. The real difference then isn't in their capabilities, but in their performance characteristics.

When a typical consumer CPU can run a large number of threads simultaneously, and a GPU can perform general computing work, will you really need both? A day will come when GPUs can compile and run C code, and CPUs can compile and run HLSL code -- though perhaps with significant performance disadvantages in each case. At that point, both the CPU guys and the GPU guys will need to do some soul searching!
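To make the kind of workload Tim describes a little more concrete, here is a minimal sketch of our own (purely illustrative, not code from Epic): a trivially data-parallel routine in plain C, whose loop body is exactly the sort of independent, shader-like work that could migrate to a GPU.

/* A minimal, illustrative sketch (ours, not Epic's) of a "highly parallel
   operation on a largely constant dataset" as described above. Every
   iteration reads shared, read-only inputs and writes one independent
   output -- the shape of work that maps naturally onto a GPU, where each
   iteration could become one shader-style invocation. */
#include <stddef.h>

void scale_and_bias(const float *in, float *out, size_t n,
                    float scale, float bias)
{
    /* No iteration depends on any other, so this loop could run across
       thousands of parallel pipelines instead of serially on the CPU. */
    for (size_t i = 0; i < n; ++i)
        out[i] = in[i] * scale + bias;
}

On today's CPU the loop runs serially (or across a handful of threads); on the kind of computationally complete GPU Tim envisions, each iteration could be issued as its own parallel invocation, which is where the performance characteristics of the two parts diverge.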

We'd like to thank Tim for taking the time to do this interview, as it has been a rather busy period of late for Epic with the release of the Unreal Tournament 2004 demo. The interview was conducted via email, and Tim had to excuse himself from answering several questions pertaining to DirectX9 Shader Model 3.0 (SM3.0). We assume he doesn't want to run the risk of unintentionally breaking any NDAs with IHVs like NVIDIA and ATI, especially when their next-generation offerings are expected to be based on SM3.0 and shouldn't be too far away from release.
