3DMark05 marked a new stage for benchmark applications by virtue of being such a large download. Do you expect this pattern to continue over the next several versions of 3DMark, or do you believe a reasonable balance can be struck that puts a ceiling on the final package size?

I am not sure how big the next 3DMark will be, but I am certain it will be bigger than 3DMark05. I mean, game demos are soon over 1GB (or aren’t they already?), so I don’t see the file size as a direct problem. These days the majority of users have DSL/cable modems, so the file size isn’t that much of an issue. Every 3DMark contains such a huge amount of content that it will inevitably be big, and the next 3DMark will be bigger than 3DMark05 since it is a much bigger benchmark from any angle. Of course we do our best to keep the file size at a reasonable level, but every bit has to be there. We won't cut corners just to get the file size smaller.

There has been a growing difference between frame buffer sizes at the low and high ends of the market - this obviously causes problems when planning a unified benchmark that can stress cards with lots of memory (256MB+) whilst not being overly limited by the 128MB low-end models; what options have you looked at to address this and which do you feel are the most practical?

Well, the next 3DMark will require 256MB of VRAM for all the graphics tests, so I don't think we will encounter that problem. Of course users with 128MB cards will be able to run it, but the minimum requirement for a valid result is 256MB, i.e. enough to run without swapping textures between system RAM and video RAM. The next 3DMark is targeted at high-end hardware.
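
As an illustration of the kind of check this implies (a hypothetical sketch, not Futuremark's code), a Direct3D 9 application can only ask the driver for a rough estimate of available video memory:

```cpp
#include <d3d9.h>

// Hypothetical helper, not Futuremark code: gate a "valid result" on the
// amount of video memory the driver reports. GetAvailableTextureMem()
// returns only a rough estimate (rounded to the nearest MB, and it may
// include AGP/shared memory), so treat this as a sanity check at best.
bool MeetsMinimumVideoMemory(IDirect3DDevice9* device,
                             UINT requiredMegabytes)
{
    UINT availableBytes = device->GetAvailableTextureMem();
    return availableBytes / (1024u * 1024u) >= requiredMegabytes;
}

// Usage (assuming an already-created device):
//   if (!MeetsMinimumVideoMemory(device, 256)) { /* flag result invalid */ }
```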

The Xenos graphics processor in the Xbox 360 offers a good indication of how graphics cards may evolve over the next few years. Does anyone on the 3DMark development team have experience with this system yet? Do you have any plans, as part of R&D for future projects, to experiment with an Xbox 360?

Consoles are not a target platform for 3DMark, so the answer is no. We are of course interested in getting our hands on all the new consoles to play games on them, but developing 3DMark on or for them is not our focus. This is not to say that we have not worked on custom benchmarks for some next-generation console(s) :)

Geometry has played a big part in the last 2 versions of 3DMark.  The tessellation functions offered within DirectX since 8.0 have not been used at all though (due to hardware support issues); do you expect this trend to continue for the next few releases or is there sufficient evidence around to suggest that it might be worth exploring/using the functions next time round?

Currently we don’t use any of the tessellation functions found in DirectX since there is very limited hardware support for those. Hopefully this will change in the future.
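
For reference, the tessellation support in question (N-patches and rect/tri patches, exposed since DirectX 8) is advertised through the device caps; a short Direct3D 9 probe, offered here purely as a sketch, makes the limited support easy to see:

```cpp
#include <cstdio>
#include <d3d9.h>

// Sketch only: report the fixed-function tessellation caps that DirectX
// has exposed since version 8 (N-patches, rect/tri patches). On most
// cards of this era these bits come back unset, which is the limited
// hardware support referred to above.
void ReportTessellationCaps(IDirect3D9* d3d)
{
    D3DCAPS9 caps = {};
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return;

    std::printf("N-patches:  %s (max tessellation level %.1f)\n",
                (caps.DevCaps & D3DDEVCAPS_NPATCHES) ? "yes" : "no",
                caps.MaxNpatchTessellationLevel);
    std::printf("RT-patches: %s\n",
                (caps.DevCaps & D3DDEVCAPS_RTPATCHES) ? "yes" : "no");
}
```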

Over the years, the number of features and functions available in DirectX has increased substantially. Games, on the other hand, have generally been rather slow to pick them up; however, there have been some notable exceptions of late, and there would appear to be a concerted effort amongst the game development community to support these functions in order to better promote their games. What kind of challenge does this present to you, in terms of finding the right balance between exploring the available techniques in a new API and only using those which are most likely to appear in actual games? For example, we've seen many games use tone mapping, parallax bump mapping, etc., but not vertex textures, yet they're all modern routines worth using.

It is a tricky thing to balance the benchmark so that it represents games and introduces new effects and features, while not driving it towards being a “tech demo”. The features and functions we use in 3DMark are things we have discussed with our BDP members, and therefore we are very confident that our choices are the correct ones. Of course we push the hardware, and the use of features, a bit further than games do in general, but that’s what a benchmark is supposed to do. Take 3DMark05 as an example: we limited the benchmark to SM2.0+ compliant hardware and pushed SM2.0 to its limits. That was a decision we had to make, and we think it was the correct one. We need to move forward in order to give users the most out of their hardware. Enabling the benchmark to support all older generations of hardware would not only create an enormous amount of extra work for our programmers and artists, but also limit our imagination.

I like to think that the 3DMark series has a pretty good and accurate track record of using feasible and useful features. A couple of examples off the top of my head: the pixel-shaded water (introduced in 3DMark2001), stencil shadows (3DMark03) and perspective shadow maps (3DMark05). Those techniques have since been used in multiple games, proving that what we use in 3DMark is something that can and will be used in games too. Of course things change over time and we move forward to new techniques, but we have still been able to show gamers what effects and techniques to expect in future games.
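
As a concrete illustration of the SM2.0 gate described above, and purely as a hedged sketch since Futuremark's actual check has not been published, a Direct3D 9 application can read the supported shader versions straight from the device caps:

```cpp
#include <d3d9.h>

// Hypothetical sketch of an SM2.0 gate; Futuremark's actual check is not
// public. Both shader pipelines must report at least version 2.0 before
// the benchmark would accept the hardware.
bool SupportsShaderModel2(IDirect3D9* d3d)
{
    D3DCAPS9 caps = {};
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;

    return caps.VertexShaderVersion >= D3DVS_VERSION(2, 0) &&
           caps.PixelShaderVersion  >= D3DPS_VERSION(2, 0);
}
```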

In the next 3DMark we will again introduce a great set of new features and effects that I believe no game shipped to date has used. Some effects are the same as those in 3DMark05, but vastly improved in both efficiency and quality. Whenever we create a new effect for our products, we test it very thoroughly to see how far we can take it. If it turns out to be severely limited, we usually drop it. The reason is that we have high standards for anything we do: it needs to be robust and work in other potential projects as well. Another important point is that any effect we use in 3DMark should be usable in games as well.