With the increase in surface detail (via high-resolution normal maps), texture aliasing has become a problem; are you guys taking this into account with your new tests?

There are some neat ideas and approaches to overcoming aliasing in normal map filtering specifically, but they have certain practical issues that need to be solved before they can be widely used. We are currently looking into some of them and experimenting. That’s all I can say for the moment.

3DMark03 = full precision; 3DMark05 = full and partial precision. If one presumes that the next 3DMark will have exclusive SM3.0 tests (or at least make some use of SM3.0 features), do you plan on staying with full precision by default? SM3.0 obviously requires FP32 support for compliance, but it also still supports the _pp modifier.

We will continue with the approach we used in 3DMark05: we use partial precision where we are confident that it is enough. Mostly such computations deal with color values, but there are also some geometric computations that can be done at partial precision.
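To illustrate the kind of split described above, here is a minimal HLSL sketch (all names hypothetical, not taken from 3DMark’s actual shaders): geometric math stays at full precision, while color math uses the half type, which compiles to instructions carrying the _pp modifier on hardware that supports it.

```hlsl
// Hypothetical pixel shader illustrating mixed precision (ps_2_0+).
sampler diffuseMap : register(s0);

float4 main(float2 uv       : TEXCOORD0,
            float3 normal   : TEXCOORD1,   // interpolated surface normal
            float3 lightDir : TEXCOORD2) : COLOR
{
    // Geometric quantities kept at full precision: normalization and
    // dot products are sensitive to rounding error.
    float3 N = normalize(normal);
    float3 L = normalize(lightDir);
    float  nDotL = saturate(dot(N, L));

    // Color math tolerates FP16 well: partial precision is enough here.
    half4 albedo = tex2D(diffuseMap, uv);
    half3 lit    = albedo.rgb * (half)nDotL;
    return half4(lit, albedo.a);
}
```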

The texture-to-arithmetic ops ratio in the last two 3DMarks has steadily decreased. What do you have in mind for the next 3DMark, and what governs such decisions - game developer trends or your own design choices? Would there ever be a point where no texture lookups are used for intermediate calculation values?

The next 3DMark will again have an increased arithmetic-to-texture ops ratio for the SM3.0 tests. Mostly this means performing things like specular shading using math instead of texture lookups, but we already have some shaders with no texture lookups at all; the atmospheric scattering shaders are an example.
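As a rough illustration of that trade-off, the two HLSL functions below (hypothetical names, not 3DMark code) show a specular falloff computed once as a texture lookup and once as pure arithmetic:

```hlsl
sampler specLUT : register(s0);   // 1D lookup texture of pow(x, shininess)

float specViaTexture(float nDotH)
{
    // SM1.x/2.0-era approach: bake the exponentiation into a texture.
    return tex1D(specLUT, nDotH).r;
}

float specViaMath(float nDotH, float shininess)
{
    // Arithmetic approach: costs ALU instructions instead of texture
    // bandwidth, and shininess can vary per pixel rather than per LUT.
    return pow(saturate(nDotH), shininess);
}
```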

HDR - this is very much the latest "in-thing" for current and near-future games. You have tentatively introduced it in the past two 3DMark versions, so are you planning greatly increased usage of this rendering procedure? If so, what kind of methods have you experimented with? Games currently employ two methods: "direct" rendering via the use of FP blending and FP surfaces, and "ping-ponging", whereby SM2.0 or higher pixel shaders output to an FP surface, which is then reused in a subsequent shader. There is now hardware on the market from at least two major IHVs to support the former, but the latter could be used across a wider range of products - which area would you aim for?

Our HDR implementation requires full SM3.0 support and FP16 textures & blending. This means that it won’t be possible to run the HDR tests on SM2.0 cards, or on SM3.0 cards without support for FP16 textures & blending. We wanted to take advantage of the new features of the latest hardware, and didn’t want to start making any "fallbacks" to SM2.0. Besides, the shaders in the HDR tests are so complex that they couldn’t be made to work on SM2.0 hardware anyway. We never even considered that as an option.
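For readers unfamiliar with the two methods the question describes, here is a hedged HLSL sketch of the "ping-pong" combine pass (sampler names hypothetical). With true FP16 blending, the pass below disappears entirely: the new contribution is rendered with additive blending straight into the accumulation surface by the fixed-function output merger.

```hlsl
// "Ping-pong" accumulation for hardware without FP16 blending:
// read the previous FP16 surface, add the new contribution in the
// shader, and write to a second FP16 surface (the two swap each pass).
sampler prevAccum : register(s0);  // FP16 surface from the last pass
sampler newLight  : register(s1);  // contribution rendered this pass

float4 combine(float2 uv : TEXCOORD0) : COLOR
{
    return tex2D(prevAccum, uv) + tex2D(newLight, uv);
}
```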

I personally think HDR is a great thing, and urge every developer out there to go for it! If it’s done right, it can open up many new visual treats for gamers. I have read many discussions about HDR on the net, and opinions seem to be very mixed on what HDR really is and does. Most users think HDR is simply a new, "better" bloom effect than what was possible on SM1.x/2.0 hardware. That’s not entirely wrong, but it’s certainly not the whole truth. HDR done right can make the whole game/scene look stunning. In the next 3DMark we wanted to let our artists go wild with the possibilities that HDR gives them. We have some scenes where we reach extremely high dynamic range values, and it really shows. Looks cool!
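To make concrete why HDR is more than a better bloom, here is a sketch of a simple exposure-plus-tone-mapping pass. The Reinhard-style operator shown is one common choice, not necessarily what 3DMark uses, and the names are hypothetical:

```hlsl
// Scene values above 1.0 are preserved through the FP16 pipeline and
// only compressed to a displayable range at the very end.
sampler hdrScene : register(s0);  // FP16 render target, values may be >> 1
float   exposure;                 // artist- or auto-exposure control

float4 toneMap(float2 uv : TEXCOORD0) : COLOR
{
    float3 hdr = tex2D(hdrScene, uv).rgb * exposure;
    float3 ldr = hdr / (1.0 + hdr);   // compress [0, inf) into [0, 1)
    return float4(ldr, 1.0);
}
```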

Depth of Field. This effect appeared in 3DMark03 but not in 3DMark05, and has likewise not appeared much in games. When choosing which effects to use in 3DMark, what design procedure do you follow? When looking back at a previous 3DMark to assess how well it matched game development, what judgment criteria do you use? For example, using a vertex shader to extrude the geometry for shadow volume calculations has not been picked up by any best-selling game, so when looking back at this routine, how do you weigh the original criteria for using it against the reasons why it didn’t appear in games?

For every 3DMark we try to come up with new effects and ideas that we see as useful in future games. I don’t think we have ever produced any effects or ideas that couldn’t be used in games.

The DOF (Depth of Field) we used in 3DMark03 was a cool effect, and I am actually a bit surprised that it isn’t showing up in more games. I suspect many game engines support the effect (at least according to their feature lists), but so far very few game developers have used it. I guess one of the main reasons is that it works best in movie-like scenes rather than in a user-controlled environment. As soon as the gamer has control over what’s happening, DOF becomes much more difficult to use. In scenes where the developer has full control of what’s shown on screen, DOF can make a huge difference visually. I hope more games will use the effect, since the engines support it; it is entirely up to developers whether they find a use for it.
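For reference, one common post-process formulation of DOF (not necessarily the 3DMark03 implementation, and with hypothetical names) blends a sharp and a pre-blurred copy of the frame by a blur factor derived from depth:

```hlsl
sampler sharpScene : register(s0);
sampler blurScene  : register(s1);  // pre-blurred copy of the frame
sampler depthMap   : register(s2);

float focalDepth;   // depth that should stay in focus
float focalRange;   // how quickly focus falls off with distance

float4 depthOfField(float2 uv : TEXCOORD0) : COLOR
{
    float depth = tex2D(depthMap, uv).r;
    // 0 = in focus, 1 = fully blurred (a "circle of confusion" proxy)
    float blur = saturate(abs(depth - focalDepth) / focalRange);
    return lerp(tex2D(sharpScene, uv), tex2D(blurScene, uv), blur);
}
```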

In the 3DMark03 graphics tests we used the vertex shader to extrude the geometry because, in our opinion, that’s the correct way to do it in our case. Why stress the CPU with it when the GPU/VPU can do it, leaving the CPU free for CPU-only tasks such as AI and physics? I think the reason we are not seeing games use the same approach is the one I mentioned earlier: the lack of vertex processing power in current graphics cards compared to pixel processing power. I don’t think there is a right or wrong way to do it, but since in 3DMark we wanted to offload the CPU, we used the GPU.
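The classic formulation of GPU shadow-volume extrusion looks roughly like the vertex shader below (a general sketch with hypothetical names, not 3DMark03’s exact shader): vertices whose normals face away from the light are pushed out along the light direction, so the CPU never touches the geometry.

```hlsl
float4x4 worldViewProj;
float3   lightPosWorld;   // light position in the mesh's space
float    extrudeDist;     // large value approximating "to infinity"

float4 extrude(float4 pos : POSITION, float3 normal : NORMAL) : POSITION
{
    float3 L = normalize(pos.xyz - lightPosWorld);
    // Vertices facing the light stay put; back-facing vertices are
    // extruded, stretching the mesh's degenerate edge quads into the
    // sides of the shadow volume.
    float away = (dot(normal, L) > 0.0) ? 1.0 : 0.0;
    float3 extruded = pos.xyz + L * (extrudeDist * away);
    return mul(float4(extruded, 1.0), worldViewProj);
}
```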