How do games such as Quake2 achieve such accurate lighting then? Well, Quake2 uses lightmaps. Lightmaps are precalculated textures that are blended on top of the base texture. These lightmaps contain the light spot cast by the torch; that spot is then added on top of the base texture representing the wall. The math used to create the lightmap is very different from the math used for per-vertex lighting, which is what the hardware lighting engine supports. Recreating the effect you get today with lightmaps by using vertex lighting would require a lot of fiddling by the programmer and the artists.

Does this mean that lighting acceleration is useless? Far from it, but don't expect per-vertex lighting to replace lightmaps and blending effects. Lightmaps are very realistic; they can be simulated using per-vertex lights, but it's very hard and requires a huge amount of extra work. Just to start, you would need to increase the geometry detail of the wall: many, many small triangles have to form a fine mesh representing the wall, the light intensity is then varied at each vertex of that mesh, and several light sources are needed to form a realistic light spot on the wall. It is better to stick to lightmaps. This is actually one of the reasons why hardware light engines are called inflexible. With lightmaps the game coder has 100% control over how the light source will look; with a hardware engine you have to follow the rules imposed by the hardware, and that can be very limiting or even annoying (a lot of extra work to obtain something simple).
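To make the lightmap idea concrete, here is a minimal sketch of the blending step: each texel of the base texture is modulated by the corresponding lightmap texel. The names and the exact blend (a simple per-channel multiply) are illustrative assumptions, not taken from any real engine.

```c
#include <stdint.h>

/* One RGB texel, 8 bits per channel. Illustrative type, not from Quake2. */
typedef struct { uint8_t r, g, b; } Texel;

/* Modulate a base-texture texel by a lightmap texel.
 * A lightmap value of 255 leaves the base color unchanged; lower
 * values darken it, so a bright "light spot" painted into the
 * lightmap shows through on the wall texture. */
Texel blend_lightmap(Texel base, Texel light)
{
    Texel out;
    out.r = (uint8_t)((base.r * light.r) / 255);
    out.g = (uint8_t)((base.g * light.g) / 255);
    out.b = (uint8_t)((base.b * light.b) / 255);
    return out;
}
```

Because the lightmap is precalculated offline, the renderer only pays for this cheap multiply per texel at run time, while the spot itself can come from arbitrarily expensive lighting math.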

The same usability question can be asked about transformations. Let's take the terrain engines used by racing games and flight simulators. These 3D engines generate their polygons directly: they don't use translation, rotation or scaling at all! One simple, basic way to do this is to use a kind of height map to represent the landscape. By reading values from that map and applying a simple mathematical scheme, it's possible to build the landscape without any transformations. This means that those games will see only a very limited advantage from a hardware transform engine. Naturally the cars, planes and objects on the surface do require transformations, but the landscape doesn't. So again, don't expect a hardware transform engine to accelerate everything.
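The height-map scheme described above can be sketched in a few lines. This is a hypothetical illustration, with made-up names and scales: the mesh is emitted directly in world space by reading elevations from a 2D array, so no transformation matrices are ever involved.

```c
#include <stddef.h>

#define MAP_W 4   /* height-map columns (illustrative size) */
#define MAP_H 4   /* height-map rows */

typedef struct { float x, y, z; } Vertex;

/* Fill `out` (MAP_W * MAP_H vertices) straight from the height map.
 * The grid position gives x/z directly and the stored sample gives y,
 * so the terrain needs no translation, rotation or scaling step. */
void build_terrain(const float height[MAP_H][MAP_W],
                   float grid_spacing, Vertex out[MAP_H * MAP_W])
{
    for (size_t row = 0; row < MAP_H; ++row) {
        for (size_t col = 0; col < MAP_W; ++col) {
            Vertex *v = &out[row * MAP_W + col];
            v->x = col * grid_spacing;   /* east-west position   */
            v->z = row * grid_spacing;   /* north-south position */
            v->y = height[row][col];     /* elevation from the map */
        }
    }
}
```

Since every vertex is already in its final position, a hardware transform unit has nothing to do for this geometry; only the moving objects on top of it would benefit.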

The biggest criticism is naturally support. T&L requires special support from the game engine: only games that use the OpenGL pipeline or the T&L support in DirectX 7 will see an advantage from an accelerator with hardware T&L. This means that 99% of today's games will run just as slow as before, even on NVIDIA's newest chipset, for a simple reason: they were not designed to be accelerated. As for today's game coders, they still don't have the final version of the DirectX 7 SDK, so unless they use OpenGL they can only just begin designing a T&L-compatible game. A realistic estimate is that it will take between 4 and 6 months before we see games that can take advantage of the T&L engines in the new wave of accelerators. The only exceptions to this rule? Quake2 and Q3Arena… id Software knew that T&L would come sooner or later, so they made sure to use a compatible OpenGL engine. But then again, do these games need T&L if they already run at 30-40-50 fps on today's non-T&L hardware? Where we'll really see T&L used is in the next generation of games, such as Quake4 and the like.

On a side note: why is it that all these companies use ray-traced graphics to illustrate effects? NVIDIA uses a scene from Digital Illusion to illustrate how good lighting can look… they fail to mention that it's ray-traced light, and not at all the per-vertex lighting their hardware will support. Misleading? 3dfx did the same during the T-Buffer presentation: they showed Disney's A Bug's Life to illustrate how good motion blur looks, and they used ray-traced graphics to illustrate soft shadows. All these images have a quality way above what the T-Buffer will ever produce on their next product… Misleading? We think so. Try using your own principles and hardware to illustrate how good (or should that be how bad?) it really looks.