John Carmack has said in one of his Slashdot posts that performance-driven optimizations by IHVs, delivered through their drivers, can replace floating-point (FP) code with integer (FX) code with no quality degradation, even though he started out with FP for the next Doom, but only in specific instances. So why start with FP in such instances in the first place? What kinds of instances would you target with FP from the start (to achieve the effect you personally want), yet end up indistinguishable quality-wise if FX is used instead? Conversely, in what instances will there be very noticeable differences between FX12, FP16 and FP32? How often would we encounter such instances in a game you are developing?

Deano Calver
Using float precision is easy, so it reduces development time (which is the really expensive thing). It's inevitable that a lot of shaders will be written that could (if time and resources were available) have been done in integers. It's no different from most vertex input (position, normal, UVs, etc.) being floats: nine times out of ten it's overkill and integers would do, but it's easy to use floats, so most models are still shipped with float data.

Floats won't be that important for this generation (that includes Doom3, BTW) because the artwork has to work on low-end cards; it will be the next generation (say 2005/2006) where you won't be able to fake it with fixed-point maths.

A good example would be HDRI. You can make some decent HDRI effects in fixed-point math, but it's usually optimised to a few special cases that take a lot of dev time to get right (code and art tweaks until it looks right). With floating-point HDRI you just photograph a shiny ball and use that directly. Once you use real HDRI input textures, you couldn't then fake it under fixed point as you can now.
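Deano's clamping point can be sketched in a few lines (a toy illustration, not his actual pipeline; the 8-bit channel format and the sample radiance values are assumptions):

```python
def store_ldr8(x):
    """Store a radiance value in an 8-bit unsigned [0,1] texture channel,
    the typical fixed-point format low-end cards work with."""
    return min(255, max(0, round(x * 255))) / 255

sun = 60.0  # hypothetical HDR sample of a bright light source from a probe
sky = 0.5   # an ordinary mid-range sample

bright = store_ldr8(sun)  # clamps to 1.0: the 60x extra energy is gone
mid = store_ldr8(sky)     # survives with only a tiny rounding error
```

This is why real HDRI input can't be faked after the fact: once the bright sample has been clamped into [0,1], nothing downstream can recover how much brighter than white it originally was.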

Chris Egerter
Pass :)

Tom Forsyth
There are two separate questions here - precision, and dynamic range. Precision is how many digits I can store. Dynamic range is how far I can move my decimal point. Generally, by going from integer to floating point without changing the number of bits you use to store the number, you lose precision but gain range.

So for example a floating-point number might have a large range, but not be very precise, which would allow it to store 0.0011, 1.1 and 11000 properly, but not 1.12. Conversely, integers can be very precise, but have small ranges. So for example they might be able to store 1.1, 1.11 and 1.12, but not 110.0 or 0.011.
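Tom's examples can be reproduced with two toy quantizers (assumed formats: 16-bit fixed point with 8 fractional bits, and a float keeping an FP16-style 10-bit mantissa; neither is an exact hardware model):

```python
import math

def to_fixed(x, frac_bits=8, total_bits=16):
    """Quantize to signed fixed point: total_bits wide, frac_bits fractional."""
    scale = 1 << frac_bits
    q = round(x * scale)
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, q)) / scale

def to_float(x, mantissa_bits=10):
    """Quantize to a float keeping only mantissa_bits bits of mantissa."""
    if x == 0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    step = 2.0 ** (e - mantissa_bits)
    return round(x / step) * step

# Fixed point: fine steps near 1.1, but 11000 clamps at the top of the range
# and 0.0011 rounds to zero.  The reduced float keeps both 11000 and 0.0011,
# at roughly three decimal digits of precision each.
```

The design point is exactly the trade Tom describes: the fixed format spends all its bits on one fixed window of values, while the float format spends some bits on the exponent so the window itself can slide.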

In some cases, you need high precision, but your numbers don't actually get very large or small. This is where integers (aka fixed point) are useful. An example is when doing specular calculations - they need to be precise or you get colour banding, but the numbers are still only colours on the screen, so they are all just from 0 to 1.
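The banding effect is easy to demonstrate: quantize a sharp specular falloff to different fixed-point widths and count how many distinct output shades survive (a toy sketch assuming a pow-64 specular term and naive uniform quantization):

```python
import math

def quantize(x, bits):
    """Snap x in [0,1] to an unsigned fixed-point grid with the given bits."""
    levels = (1 << bits) - 1
    return round(x * levels) / levels

# A pow(N.H, 64)-style highlight sampled across its falloff.
angles = [i * 0.5 * math.pi / 200 for i in range(200)]
specular = [max(0.0, math.cos(a)) ** 64 for a in angles]

shades_4bit = len({quantize(s, 4) for s in specular})
shades_8bit = len({quantize(s, 8) for s in specular})
# 4 bits can produce at most 16 shades across the whole highlight, so the
# smooth falloff collapses into a handful of visible bands; 8 bits keeps
# many more distinct levels.
```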

In other cases, such as calculating positions in space, you need a large range (because things can be really big but far away, or really small but close), but the question of whether something is 1.12 metres away or 1.13 metres away is often not very important. You just care that it's "about 1.1".

So it's horses for courses. You choose the lowest precision and lowest range that works. If a lower precision only produces fuzz or blur (rather than completely wrong results), then you give the user the option to switch down and get a bit more speed.

It's extremely hard to answer this [differences between FX12, FP16 and FP32], because it depends so heavily on the shader being used. I would guess that where I had given the driver a choice between high and low precisions, the visual difference between them would be small. But there are plenty of cases where using low precision produces unacceptable results, and there I would insist on high precision.

Jake Simpson
I would imagine this has a lot to do with color shaders being used on top of shadows. Given the computations you get at the low end of the spectrum when something is in a 'dark' area, precision in the code becomes far more critical, especially when you multiply many dark pixels against each other. It only takes one result hitting 'zero' because of low precision to make all the rest zero too, so the actual color you wanted to portray is lost to black. I would imagine the visual difference between FX12 and FP16 is greater than that between FP16 and FP32, in the same way that screen color is.
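Jake's "one zero poisons the chain" scenario can be sketched directly (the 10-fractional-bit grid here is only a rough stand-in for an FX12-style format, not the exact hardware behaviour):

```python
def q_fx(x, frac_bits=10):
    """Quantize to a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

dark = 0.05            # a dim color/lighting factor
v_fixed = v_float = 1.0
for _ in range(3):
    v_fixed = q_fx(v_fixed * dark)  # re-quantized after every multiply
    v_float *= dark                 # kept at full float precision

# v_fixed has collapsed to exactly 0.0; v_float is still nonzero (about
# 1.25e-4), so the fixed-point chain goes black where the float chain
# still carries color information.
```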

Tim Sweeney
DOOM is aimed at DX 7-9, so it falls in the former category.

"MrMezzMaster"
The issue is that I should be able to specify in a shader language the precision I need and have the rendered output be the same, with little variation, on all drivers/cards. If I code in floating point because it's consistent and that's what I need, I couldn't care less what the driver/card is doing under the covers as long as the rendered output is correct. Does that mean that the compiler in the API and/or driver may make "aggressive" optimizations to fixed point? Maybe. If I don't lose any visual quality, and it's not a performance hit, then I don't care that much.

There are many instances where you will see differences between the different floating and fixed formats; there are even more than you list. I want to write one shader that runs on all drivers/cards and that meets the quality I expect from the shader. I don't want to go back to creating individualized per driver/card fixed-function pipeline sets for rendering because every driver/card is so full of bugs.