Question 11

Beyond3D : Can traditional renderers overcome the problems that Tile-Based solutions solve through their structure? Like the rendering of 8-layer multitextured pixels that will never be visible, the horrible memory access patterns with smaller and smaller polygons (where many OD algorithms like early Z and hierarchical systems also fail to work effectively), the huge memory abuse when doing AA, costly stencil procedures, and expensive memory readback when doing multi-pass effects? It's easy to nag about the buffering issue that tilers might or might not have in the future, but what about all these bottlenecks in traditional systems? Not to mention the issues when using more accurate frame buffers, such as 64-bit floats?

Croteam : Tile-Based rendering can be a very good solution for the present time. But I don't think that it will hold much longer. The brute-force approach, with its power, has already hit the limits of monitor resolution. It all comes down to two things: either developers will completely embrace tile-based rendering and adapt their engines to it, or we'll all stick to simpler brute-force solutions. Tile-based rendering will always be faster than brute force, but who needs the (potential) complications of TBR when the brute-force approach is already fast enough.

MadOnion : The technology is not as important as the end result. Currently the best results have been achieved with the "traditional" 3D accelerator types. Both can be made to work, but a tile-based system may well be more cost-effective in the long run.

Basically, game developers could not care less how it's made, as long as it renders fast, has a good feature set, and they don't need to think about any special cases.

NVIDIA : There are pros and cons to any architecture. I believe that in the future we are moving toward more and more geometry, as well as more and more per-pixel shading and computation. Both of these directions require more muscular and powerful pipelines. Tile-based renderers don't address these needs. The optimization provided by a tile-based renderer is that occluded pixels that don't contribute to the final picture also don't contribute to the bandwidth consumption at the memory. In the limit, a tile-based renderer optimizes out that redundancy, at only a minimal cost for buffering and re-scanning of command streams and geometry. In the limit, a conventional renderer with occlusion culling has exactly the same performance. In each case, we have separated the visibility (what is on top) from the shading.
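
As an illustration of that last point, here is a minimal sketch of how a conventional renderer can separate visibility from shading using a depth-only pre-pass. This is not NVIDIA's implementation; the draw_scene_* callbacks are hypothetical application hooks.

```c
/* A minimal sketch of a depth-only pre-pass on a "conventional" renderer:
 * pass 1 resolves visibility by filling the Z-buffer, pass 2 does the
 * expensive multitexture shading only on fragments that survive the depth
 * test.  The draw_scene_* functions are hypothetical application callbacks. */
#include <GL/gl.h>

void draw_scene_geometry(void);   /* hypothetical: positions only          */
void draw_scene_shaded(void);     /* hypothetical: full shading/texturing  */

void render_frame(void)
{
    /* Pass 1: visibility only - write depth, mask out color writes. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    draw_scene_geometry();

    /* Pass 2: shading only - the Z-buffer already holds the nearest
     * surface, so occluded fragments are rejected before being shaded. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_LEQUAL);
    draw_scene_shaded();
}
```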

Question 12

Beyond3D : Higher Order Surface support in DX8 - are you happy with the types offered by Microsoft for games? N-Patches - are they good or bad?

Croteam : See answer to question 6. :-)
(Croteam's answer to Question 6 was "Errr ... sorry, but I'm not familiar with DX8 features (yet!). I'm still knee-deep-in OpenGL. :-)" - Rev)

MadOnion : N-patches are relatively easy to take into use; the others are harder. We haven't looked at HO surfaces too closely yet, but overall the feeling here is that they could be better. Subdivision surfaces seemed to be the anticipated feature over here.

NVIDIA : N-Patches are just wrong. They don't guarantee that adjacent patches touch; how stupid is that? Who wants gaps in their characters? Cubic or higher-order Bezier or B-spline patches can both guarantee that adjacent patches touch and be rendered without gaps. The only advantage that I can see with N-patches is that developers can convert older polygon datasets to appear smooth (and holey) without too much work. So, I guess it makes sense for low-budget games that are re-using old content.
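
To make the crack-free claim concrete, here is a small sketch (our illustration, not from the interview) of why patches that share their edge control points always meet: the boundary curve of a bicubic Bezier patch depends only on its four edge control points, so two patches that share them evaluate to identical boundary positions.

```c
/* Evaluate a cubic Bezier boundary curve.  If two adjacent patches use the
 * same four control points for their shared edge, this function returns the
 * same point for every t on both patches, so their tessellations meet with
 * no crack.  The control-point values below are placeholders. */
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

static vec3 bezier3(const vec3 p[4], float t)
{
    float s  = 1.0f - t;
    float b0 = s * s * s;
    float b1 = 3.0f * s * s * t;
    float b2 = 3.0f * s * t * t;
    float b3 = t * t * t;
    vec3 r = {
        b0 * p[0].x + b1 * p[1].x + b2 * p[2].x + b3 * p[3].x,
        b0 * p[0].y + b1 * p[1].y + b2 * p[2].y + b3 * p[3].y,
        b0 * p[0].z + b1 * p[1].z + b2 * p[2].z + b3 * p[3].z
    };
    return r;
}

int main(void)
{
    vec3 shared_edge[4] = { {0,0,0}, {1,2,0}, {2,2,0}, {3,0,0} };
    for (int i = 0; i <= 4; i++) {
        vec3 p = bezier3(shared_edge, i * 0.25f);
        printf("t=%.2f -> (%.2f, %.2f, %.2f)\n", i * 0.25f, p.x, p.y, p.z);
    }
    return 0;
}
```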

Question 13

Beyond3D : XBOX unified memory structure: great feature or the weak point?

Croteam : For 64MB of RAM - a weak point. For 128MB - a great feature! Because the framebuffer and the zbuffer eat into your main RAM, and you need fast main RAM to make the video chip run fast, which raises the price of the complete machine. But it is good since it eliminates the AGP bottleneck. Once upon a time, the Amiga had UMA, and it was its main source of supremacy over contemporary PCs in graphics speed.

Of course, I assume that overall speed won't be dragged down to the speed of the slowest component (like in today's 'integrated chipset' solutions). :-)
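
For a rough sense of how much of a shared pool the framebuffer and Z-buffer actually claim, here is a back-of-the-envelope sketch; the resolution and buffer formats are our assumptions, not figures from the interview.

```c
/* Back-of-the-envelope cost of a double-buffered 640x480 32-bit color target
 * plus a 32-bit depth/stencil buffer in a unified memory pool.  Antialiasing
 * or higher resolutions multiply these numbers further. */
#include <stdio.h>

int main(void)
{
    const long width = 640, height = 480;
    const long color_bytes = 4;   /* 32-bit color                 */
    const long color_bufs  = 2;   /* front + back buffer          */
    const long depth_bytes = 4;   /* 24-bit depth + 8-bit stencil */

    long color = width * height * color_bytes * color_bufs;
    long depth = width * height * depth_bytes;
    printf("color: %ld KB, depth: %ld KB, total: %ld KB of shared RAM\n",
           color / 1024, depth / 1024, (color + depth) / 1024);
    return 0;
}
```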

MadOnion : How could it be a weak point? It is a great feature. It makes the XBOX memory management easier from the developer's point of view, and probably also from Microsoft's side.

And looking at current / upcoming games, 64MB of memory is very little to begin with. If that had to be split somehow, it would limit game developers further. This way the XBOX gives flexibility where it belongs - to the developer.

NVIDIA : A unified memory architecture presents both challenges and opportunities. If developed poorly (hopefully we didn't do that), it's a burden on both CPU and GPU performance. However, a typical GPU memory system is FAR higher performance than a typical CPU core-logic memory system. So, it should be a net gain. Also, in the conventional PC system architecture, data must frequently be copied from one memory pool to another (system memory, AGP memory, GPU memory), so for any procedural data generated by the CPU, you waste a lot of bandwidth as it crosses the bus multiple times. I think that when you net everything out, unified memory is a big win, because it creates optimization opportunities that aren't there otherwise. Poorly written games can hurt themselves, however!

Question 14

Beyond3D : Can you tell us (based on your own experience) what the easiest and the hardest parts of game development are - from starting to think about your next game, to actually programming it, to "the story", to "technological considerations", etc. - and why?

Croteam : The easiest part is ideas. Everyone has tons of ideas. That's no problem. The problem begins when you have to start shaping those ideas and putting them in the game. :-) But ... the hardest part is building an engine.

MadOnion : The easy part is figuring out on the concept level what to do. Coming up with a storyboard & ideas is easy.

The hard part is perhaps evaluating what technology will be available in the future (when the app is done), and planning the technology abstraction so that it can cope with future changes. Technology changes, but projects should not take development hits because of it.

Other difficult parts are preventing feature creep during development and having good change management. If these aren't handled well, they will cause the last hard part: keeping the product on schedule :-).

NVIDIA : I'll answer this, even though it clearly applies to developers. My feeling is that the best way to identify the hardest part of game development is to identify the part that programmers don't do well. And, in observing the games and engines that people create, it appears that writing scalable content is very difficult. Developers do not typically do a good job of targeting a range of hardware. This is relatively straightforward to do by designing lighting/shading models that have fallbacks and building models at various geometric levels of detail, but it increases the upfront work and the investment in art.
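
Below is a minimal sketch of the kind of scalable-content decision being described - picking a shading path from the hardware's capabilities and a mesh level of detail from distance. The enum names, thresholds, and three-mesh assumption are ours, not NVIDIA's.

```c
/* Scalable content sketch: fall back through shading models based on what
 * the hardware exposes, and choose one of several pre-built meshes based on
 * distance.  All names and thresholds here are illustrative placeholders. */

enum shading_path { PATH_VERTEX_LIT, PATH_MULTITEXTURE, PATH_PIXEL_SHADER };

enum shading_path choose_shading_path(int has_pixel_shaders, int texture_units)
{
    if (has_pixel_shaders)  return PATH_PIXEL_SHADER;  /* full-quality model */
    if (texture_units >= 2) return PATH_MULTITEXTURE;  /* first fallback     */
    return PATH_VERTEX_LIT;                            /* lowest fallback    */
}

int choose_lod(float distance_to_camera)
{
    /* Assumes three hand-authored meshes per model (0 = highest detail). */
    if (distance_to_camera < 10.0f) return 0;
    if (distance_to_camera < 40.0f) return 1;
    return 2;
}
```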

Question 15

Beyond3D : What is your "favourite" 3D feature (i.e. Pixel Shading, Higher bit precision, Vertex Shading, Displacement Mapping, HOS....)? Why?

Croteam : Higher bit precision should be the next thing in creating more realistic scenes (in the rasterization area). And subdivision surfaces are the way to go in the future - T&L-wise. They are the perfect tradeoff between speed and complexity.

MadOnion : Vertex shaders combined with pixel shaders will be my favourite feature in 12-18 months, hopefully still combined with higher order surfaces. Currently, things like rendering to textures, fast HW T&L, and good fill rate are nice to have.

But 100% functional drivers for 3D accelerators are probably my number 1 wish.

In the future, maybe displacement mapped subdivision surfaces :-).

NVIDIA : Programmable Shading. It will change the world.