If your code uses multiple precisions because it has gone through NVIDIA's 'The Way It's Meant To Be Played' (TWIMTBP) campaign and has had its shaders optimised for their shader architecture, do you think you would have invested the effort in optimising for multiple precisions had TWIMTBP not been there, given that on much of the DX9 hardware already available there would be little to no performance difference?

Dean Calver
Given the budget/time constraints, we tend to be able to write only a few extra code paths. All IHVs actively write code paths optimised for their hardware, because they know we will probably only be able to do the most common one. Last generation it was ATI writing PS_1_4 shaders; this generation it may be NVIDIA writing custom ones. I really doubt I could justify rewriting a shader that worked well on most hardware except NVIDIA's (and that applies to any IHV); I'd expect the dev-rel or the driver writers to sort it out. It's their job to make their hardware run stock DX9 shaders well.

Chris Egerter
I still would have invested time to try different precision modes. You have to think that in the future all cards will be using full floating-point precision, and it will be much faster than what we have today. I want the engine to be ready for the future as well as supporting current cards, which is why we need to have different rendering paths.

Tom Forsyth
The TWIMTBP campaign, like all developer relations initiatives and documents, is useful for raising consciousness and reminding people that they need to think about these issues.

Yes, the PP [DirectX Partial Precision] hints currently only affect NVIDIA's boards, and possibly others later on. However, it is not very much work to write your shaders keeping track of where you need the extra precision and where you do not. You need to do this when considering the PS1.x versions of the shaders, so it's brain-work that needs doing one way or the other. If it is done while writing the shaders rather than as a retrofit, it takes very little effort, in the same way that using bytes and ints and floats appropriately is not taxing for most C programmers.
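
To make that concrete, here is a minimal sketch of what placing PP hints looks like in DX9-era HLSL; the sampler and variable names are hypothetical, an illustration rather than code from any of the games discussed. Declaring a value as half rather than float emits the _pp instruction modifier when compiled for a ps_2_x target, and hardware with a single internal precision simply ignores the hint:

    sampler2D diffuseMap;   // hypothetical diffuse texture

    float4 main(float2 uv       : TEXCOORD0,
                float4 lightCol : COLOR0) : COLOR
    {
        // Colour data only ever feeds an 8-bit framebuffer, so FP16
        // partial precision is safe here; it is the same judgement
        // call as choosing a byte over an int in C.
        half4 albedo = tex2D(diffuseMap, uv);

        // Values derived from texture coordinates (dependent reads,
        // large-range maths) are kept at full float precision.
        return albedo * (half4)lightCol;
    }

On FP24 and FP32 hardware the output is unchanged; the hint merely gives FP16-capable hardware permission to take the faster path.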

Every PC game does some amount of card-specific tuning, small or large, and adding PP in the right places is a small amount of work for sizeable potential benefits on some hardware. That is a compelling enough reason for me.

Jake Simpson
I think this would depend on what the shader was, and how heavy it was in GPU processing. In some cases where the shader was used heavily, I probably would. If it was a specialist shader that didn't get used much and the GPU didn't spend a lot of time in it, then no.

Tim Sweeney
We optimize our games to run on popular hardware because it pleases our customers. We partner with NVIDIA on their "The Way It's Meant To Be Played" marketing program because it helps achieve our mutual business and marketing goals.

These are separate things; you don't have marketing people optimizing your code, or programmers running your marketing campaign.

"MrMezzMaster"
TWIMTBP is just a marketing attempt to minimize the fact that NVIDIA's graphics drivers/cards are not high-performing when it comes to general shaders (especially in regard to floating point). "Hey, come on in and we'll help you write fast shaders on NVIDIA cards, 'cause you gotta tweak 'em a lot since we're not fast, general-purpose shader hardware." As I've said repeatedly, as a programmer I want to write a single shader with the precision I specify, and the quality I expect, and have it run on all drivers/cards. I don't want to go back to the bad old days where I have to do a ton of per-driver/card compatibility work.

Markus Maki of Remedy Entertainment also responded, but gave a single summarized answer to all five questions:

Markus Maki
First of all, I think this has been turned into too big a deal. For developers it would of course be nicest if all hardware worked in a similar fashion, but hey, it's not an optimal world :)

In most cases, it is easiest if game developers can just use the available shader models and get the expected quality (as defined by the specs and the reference rasterizer).

It doesn't make a difference whether the hardware internally works in FX12, FP16, FP24 or FP32 if you only need and expect integer accuracy, as with the PS1.x models for example. And yes, in some cases it may be possible to do lossless degradation, but I'd assume developers are smart enough not to request FP precision if they don't need it.
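
As a hypothetical illustration of that point, consider the classic PS1.x modulate, written here in HLSL with made-up names. Compiled for a ps_1_1 target, the spec only guarantees roughly 8-bit integer accuracy, so FX12, FP16, FP24 and FP32 hardware all produce the same visible result:

    sampler2D baseMap;      // hypothetical base texture

    float4 main(float2 uv        : TEXCOORD0,
                float4 vertexCol : COLOR0) : COLOR
    {
        // Texture modulated by vertex colour: the canonical PS1.x
        // operation. Whatever the internal hardware precision, the
        // difference is invisible in the 8-bit framebuffer output.
        return tex2D(baseMap, uv) * vertexCol;
    }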

The more developers explore what can be done with DX9, the more accuracy they will want - even FP24 will not be enough in the long term.


We'd like to thank the developers who participated (as well as those that didn't respond ;) ). During the course of conducting this interview, we received word that NVIDIA's 50.xx series of drivers (currently in beta) contains some changes, compared to the current 4x.xx drivers, with regard to precision; from a quality and fidelity point of view, it appears to be good news. We attempted to get a publishable comment from NVIDIA regarding this but have not received any acknowledgement as yet. Of course, what is in internal beta drivers does not always make it into public drivers, so unless we receive official confirmation from NVIDIA, we take this sort of information from our sources with a grain of salt.

