One of the key things NVIDIA has been talking about since the introduction of NV40 is High Dynamic Range rendering (HDR). There appears to be a bit of a disconnect here, though, as "higher quality pixels" are always going to need more bandwidth, and although you can trade that off to a certain extent by doing more internally in the shaders and making fewer external calls, there is still a large difference between the very low end and the high end in terms of available bandwidth - to the point where the current generation doesn't have HDR capabilities at the low end, but does on the mainstream and high end parts. How can the disconnect between the very high and low end be managed in pushing photorealism to all parts of the spectrum?

That's actually one of the beauties of HDR, as you can render the same game in two different modes, with one mode being photo-real. And when I say photo-real, I actually mean professional quality photo-real. Ansel Adams is famous not because he took a bunch of pictures of rocks, but because he was able to capture the dynamic range of photographs and display it in monochrome. But the dynamic range that he captured is unnatural - it's beyond the dynamic range of the camera - and that's his genius: he was able to take a camera that had basically 8, 9 or 10 bits of dynamic range, let's say, but somehow get a lot more than that, and that's the kind of image that we're going to express.
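The rendering analogue of that idea is to keep scene luminance in floating point and only compress it down to the display's 8-bit range at the very end of the pipeline. As a minimal sketch of that final step, here is the well-known Reinhard global tone-mapping operator (our illustrative choice, not anything NVIDIA describes here):

```python
import numpy as np

def reinhard_tonemap(hdr, exposure=1.0):
    """Compress floating-point HDR radiance into a displayable 8-bit range.

    hdr: array of linear radiance values, potentially far above 1.0.
    The Reinhard curve x / (1 + x) maps [0, inf) into [0, 1),
    keeping detail in the shadows while rolling off bright highlights.
    """
    scaled = hdr * exposure
    ldr = scaled / (1.0 + scaled)
    # Quantise to the 8-bit range an ordinary display expects.
    return np.round(ldr * 255).astype(np.uint8)

# A scene spanning many stops of dynamic range still yields usable pixels.
scene = np.array([0.01, 0.5, 1.0, 16.0, 650.0])
print(reinhard_tonemap(scene))  # dim values stay distinct, highlights saturate gracefully
```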

One of the key inflection points we've had over the past year is PCI Express; what did this mean to NVIDIA?

High speed interfaces between the graphics chip and the CPU are so incredibly important, and obviously one thing PCI Express enabled us to do was SLI; TurboCache is another example of the type of benefit that you can get. PCI Express bandwidth is now at a level consistent with a 64-bit DDR framebuffer, so it's pretty good even if you just thought of it as a framebuffer interface.
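The comparison holds up on a back-of-the-envelope basis: a first-generation x16 PCI Express link delivers 250 MB/s per lane per direction, and a 64-bit DDR memory bus at an assumed 500 MT/s (a figure picked here for illustration) lands in the same place:

```python
# First-generation PCI Express: 250 MB/s per lane, per direction.
pcie_x16 = 16 * 250e6                  # bytes/s, one direction
print(f"PCIe x16:   {pcie_x16 / 1e9:.1f} GB/s")

# 64-bit (8-byte) DDR memory bus at an assumed 500 MT/s effective rate.
ddr_64bit = 8 * 500e6                  # bytes/s
print(f"64-bit DDR: {ddr_64bit / 1e9:.1f} GB/s")
```

Both work out to 4.0 GB/s, which is why a TurboCache part can plausibly treat system memory across the bus as an extension of its framebuffer.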

When you have something that fast you can really re-think everything that you do. You know, if you could fly at 2000MPH, or get your car to be able to fly, you would think about the way you work and the way you travel, and you'd do everything a little bit differently. PCI Express, in a way, is like that: you really have to think, fundamentally, about the way that you architect and what you can do, and when we asked ourselves those questions, that's when SLI and TurboCache came along.

That's enabled you to do something with PCI Express, but what about the content side of things? What would you expect to see here, and are you doing anything to push it forward?

Well, we invest more in content developers than just about anybody in the world - the invention of Cg, working with Microsoft on HLSL, the notion of programmable shaders, all the SDKs that we create, all the shader programs we write and give away, all the Developer Technology and Developer Relations engineers we have to help them build better content. We deeply care about the game developers and about them being able to use the technology.

Greater bandwidth gives you the ability to download textures more quickly, and that's a big, big deal. At some level the bus bandwidth limits the amount of geometry you can send over the bus, so there is a theoretical limit to the level of fidelity you can achieve with any given bus - the fidelity of geometry and textures is limited by each generation [of the host interface]. Now, of course, we can get clever with things like Perlin noise shader generators to synthetically create marble, wood and things like that - procedural textures - and that's an example of computing replacing content, and we'll be able to do those things, but fundamentally there is still that theoretical limit.
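To make the "computing replacing content" trade concrete: a procedural marble texture costs a few dozen instructions per pixel instead of megabytes of texture traffic over the bus. Here is a minimal CPU-side sketch using simple value noise with turbulence - a stand-in for true Perlin noise, and not any particular NVIDIA shader:

```python
import math, random

random.seed(42)
# A small lattice of random values stands in for Perlin's gradient lattice.
LATTICE = [[random.random() for _ in range(256)] for _ in range(256)]

def smooth(t):
    return t * t * (3 - 2 * t)          # smoothstep fade curve

def value_noise(x, y):
    """Bilinearly interpolated lattice noise in [0, 1]."""
    xi, yi = int(x) & 255, int(y) & 255
    xf, yf = x - int(x), y - int(y)
    u, v = smooth(xf), smooth(yf)
    a = LATTICE[yi][xi]
    b = LATTICE[yi][(xi + 1) & 255]
    c = LATTICE[(yi + 1) & 255][xi]
    d = LATTICE[(yi + 1) & 255][(xi + 1) & 255]
    return (a * (1 - u) + b * u) * (1 - v) + (c * (1 - u) + d * u) * v

def turbulence(x, y, octaves=4):
    """Sum noise at several frequencies for a natural, fractal look."""
    total, freq, amp = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += value_noise(x * freq, y * freq) * amp
        freq *= 2.0
        amp *= 0.5
    return total

def marble(x, y):
    """Classic marble pattern: a sine wave perturbed by turbulence."""
    return 0.5 * (1.0 + math.sin((x + 5.0 * turbulence(x, y)) * math.pi))

# Sample a tiny patch; no texture data crossed any bus to produce these values.
for row in range(4):
    print(" ".join(f"{marble(col * 0.3, row * 0.3):.2f}" for col in range(8)))
```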

There have long been elements of graphics research which haven't made their way onto consumer PC graphics products yet, one of them being geometry compression and another being hardware tessellation - which could be seen as another form of geometry compression...

Higher order surfaces, procedural textures - they are all forms of compression, and we're a big fan of higher order surfaces. I don't know if you remember NV1, but it was based on higher order surfaces because there was a concern that the PC would not have a floating point unit and that the PCI bus that eventually came to be wasn't going to be available - we'd be stuck on the local bus - which is why we invented curved surfaces. It turned out that our tools just weren't ready for it then, but subdivision surfaces are going to happen. In games today there is often polygonal detail that we'd like not to see, and subdivision surfaces will eliminate it.
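The compression angle is easiest to see in one dimension: send a handful of control points over the bus and let the hardware refine them into a smooth shape. Here is a minimal sketch using Chaikin's corner-cutting scheme, the curve analogue of the subdivision-surface idea (our illustrative choice, not NV1's actual scheme):

```python
def chaikin(points, iterations=3):
    """Refine a coarse control polygon into a smooth curve.

    Each pass replaces every edge (p, q) with two new points at its
    1/4 and 3/4 marks; the polygon converges to a quadratic B-spline.
    """
    for _ in range(iterations):
        refined = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = refined
    return points

# Four control points expand into a much denser smooth curve on-chip;
# only the four ever needed to cross the bus.
coarse = [(0, 0), (1, 2), (3, 2), (4, 0)]
print(len(chaikin(coarse)), "points from", len(coarse), "control points")
```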