Voodoo3 specific comments:

3dfx uses a special technique to improve image quality. It relies on a post-filter operation located between the frame buffer and the RAMDAC, and it is described in three previous articles that can be found here, here, and here. As a result of this technology, the data in the frame buffer is not identical to the final data written to the screen: the frame buffer contains non-post-filtered 16-bit data, while on screen you get upsampled, post-filtered 22-bit color.

Because of this, special capture software is needed to capture the true output quality. At this time, only Hypersnap V3.30 supports the post-filter operation, and it requires the installation of a special new DLL. This is necessary because the original DLL delivered with version 3.30 contains the Voodoo2 post-filter; only the new DLL contains the updated Voodoo3 post-filter. The DLL has only been available since today (ok, for a bit now... the article was delayed), so almost all screenshots available on the net are invalid: they are either non-post-filtered or post-filtered using the Voodoo2 filter. In both cases the result is not representative of the true output of Voodoo3.

There is a very simple way to see whether an image is correct or not. If the image contains very obvious dithering patterns, it hasn't been post-filtered at all. If it contains weird horizontal artifacts (lines alternating darker and lighter), it has been post-filtered using the old Voodoo2 filter. Correctly filtered images show only very faint dithering patterns, and the difference with the previous two cases should be very obvious. You can find the Hypersnap screen capture program here and the new Voodoo3 DLL here.
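The exact filter kernel 3dfx uses isn't public, but the principle is easy to demonstrate. Here's a minimal sketch, assuming a simple 4-tap horizontal average over decoded RGB565 pixels (the tap count and weights are my assumption, not 3dfx's real coefficients):

```python
# Sketch of a scanout post-filter in the spirit of 3dfx's technique.
# Assumes a simple 4-tap horizontal average, which is enough to show
# how dithered 16-bit data yields in-between colors on screen.

def rgb565_to_rgb888(p):
    """Expand a 16-bit RGB565 pixel to three 8-bit components."""
    r = (p >> 11) & 0x1F
    g = (p >> 5) & 0x3F
    b = p & 0x1F
    # Replicate the high bits into the low bits so white stays white.
    return (r << 3 | r >> 2, g << 2 | g >> 4, b << 3 | b >> 2)

def post_filter_scanline(pixels):
    """Average each pixel with its three right-hand neighbors.

    Because the frame buffer holds *dithered* 16-bit data, adjacent
    pixels alternate between the two nearest representable colors;
    averaging them recovers shades the 16-bit format cannot store
    directly (the "upsampled 22-bit" effect).
    """
    out = []
    n = len(pixels)
    for x in range(n):
        taps = [rgb565_to_rgb888(pixels[min(x + i, n - 1)]) for i in range(4)]
        out.append(tuple(sum(c) // 4 for c in zip(*taps)))
    return out

# A dithered scanline alternating two adjacent 16-bit greens:
scanline = [0x07E0, 0x07C0] * 8            # green level 63 and 62
print(post_filter_scanline(scanline)[:2])  # an intermediate green appears
```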

Various theoretical factors influencing image quality:

Color resolution is what we know as 16/22/24-bit output. The basic link between image quality and this factor is very simple: the more bits you have, the more different colors you can have. These bits are spread over the three color components, Red, Green and Blue. Usually Green gets a couple of extra bits because the human eye is more sensitive to shades of green.

Actually, the human eye is pretty weird and non-linear. Let's illustrate this with some samples related to color depth. In tests, scientists discovered that humans can detect a contrast that would need 64 bits to describe using normal RGB components. That is impressive, but it does not mean we need 64 bits for continuous color. The human eye is very sensitive to sudden changes and contrast differences, and in those special tests two large neighboring surfaces with slightly different colors were placed side by side. Real images contain the same subtle color differences, but we are unable to detect them. Why is that? Well, the brain is a remarkable thing, and it adapts itself for increased efficiency. In real images you have millions of different colors, and the brain can't keep track of them all, so the amount of color detail we see is greatly reduced to a level the brain can handle. Our basic sensors (part of the retina of the eye) are very sensitive; lab tests have revealed that these sensors can respond to even a single photon (one light particle). But this sensor information is processed massively in parallel by the brain, and detail is lost along the way. The brain tends to average out similar data and keeps better track of the more important stuff, such as edges. Basically, we notice sudden changes very fast, but slow and smooth changes tend not to attract our attention; we have to be focusing to notice the change.

What does all this mean for image quality? The main thing to avoid is sudden unwanted changes. You don't want something red to suddenly turn blue. In average situations 24 bits is enough, mainly because we are watching live moving images on the screen, which means color information is extracted not from one image but from several. The human visual system can analyze about 30 different frames per second; if the accelerator generates more than 30 fps, several frames are combined in the brain and color is interpolated in time. A similar interpolation happens in space, meaning that pixels close together blend together due to the limited resolution of the eye. (The eye has high resolution in the center and poor resolution at the edges, but the brain makes sure we don't really experience this change.) Dithering algorithms exploit exactly this blending effect: dithering can increase the perceived color depth, and temporal interpolation helps as well, which is another reason for high frame rates. The basic idea remains the same: higher is better. How high is high enough? That's rather hard to say; it really depends on the situation.
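To make the dithering point concrete, here is a minimal sketch of ordered (Bayer) dithering, one common way to trade spatial detail for perceived color depth. The 4x4 matrix and the 8-to-5-bit reduction are illustrative assumptions, not necessarily the pattern any particular chip uses:

```python
# Minimal sketch of ordered (Bayer) dithering: the general idea behind
# trading spatial resolution for perceived color depth.

BAYER_4x4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_channel(value, x, y, bits=5):
    """Quantize an 8-bit channel value to `bits` bits at pixel (x, y).

    The threshold from the Bayer matrix decides whether to round up or
    down, so a mid-level color becomes a pattern of the two nearest
    representable levels; the eye averages the pattern back out.
    """
    levels = (1 << bits) - 1
    scaled = value * levels / 255.0
    threshold = (BAYER_4x4[y % 4][x % 4] + 0.5) / 16.0
    q = int(scaled) + (1 if scaled - int(scaled) > threshold else 0)
    return min(q, levels)

# A flat 8-bit gray of 128 dithers into a mix of two 5-bit levels:
row = [dither_channel(128, x, 0) for x in range(8)]
print(row)  # mixes levels 15 and 16, averaging out near 128/255
```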

Textures are the basic input of the rendering core, so it's only natural that the resolution of these textures (think of resolution as the amount of detail: the higher, the more detail) has an impact on image quality. Many companies are shouting very loudly about their high-resolution texture support. High-resolution textures mainly matter when you get close to a polygon, because you need a lot of texture detail to maintain image quality at that distance. It's pretty logical, actually. Let me illustrate with another example. Assume we have a 256x256 texture applied to a square in 3D space. If the polygon is rather far away, the square might take up only 25x25 on-screen pixels, since farther objects appear smaller. Now assume this square starts to move towards the camera. It gets bigger and covers more on-screen pixels. At some point it reaches a size where it occupies 256x256 on-screen pixels, and as it gets even closer it keeps growing, to 512x512 and beyond. These steps should make it clear that when the polygon is far away you need very little texture detail, but when the polygon is very close you need a lot of it. This effect is used by something called MipMapping. The concept is very simple: MipMapping applies one of several resolution versions of the same texture to an object based on how close to the camera it is. From the example we know that the closer we get to an object, the more detail we want, so a high texture resolution is important when you get very close to objects. Now the real question is: does this really matter in games? When you play a game you tend to stay rather far away from most objects, so in maybe 90% of the cases you don't need high-resolution textures, and the lower-resolution MipMaps are sufficient. But the high-resolution textures are still needed for the moments when you want to examine something closely, such as a sign on a wall.
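Here is a minimal sketch of the selection logic behind MipMapping, using the square from the example above. It assumes a square polygon and a 256x256 base texture; real hardware derives the level per pixel from texture-coordinate gradients, so this is only the basic idea:

```python
import math

# Sketch of mipmap level selection for the square-moving-toward-the-
# camera example. Level 0 is the full 256x256 texture; each level
# halves the resolution (128x128, 64x64, ...).

def mip_level(texture_size, screen_size, num_levels):
    """Pick the mip level whose resolution best matches screen coverage.

    A far-away square covering only 25x25 pixels needs a small mip;
    a close-up 512x512 square needs level 0 (and would benefit from
    a bigger base texture, which the hardware can't offer here).
    """
    ratio = texture_size / max(screen_size, 1)
    level = max(0.0, math.log2(ratio))  # negative means magnification
    return min(int(level), num_levels - 1)

for on_screen in (25, 128, 256, 512):
    lvl = mip_level(256, on_screen, num_levels=9)
    print(f"square covers {on_screen}x{on_screen} px -> mip level {lvl} "
          f"({256 >> lvl}x{256 >> lvl} texels)")
```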

The Voodoo3 is limited to 256x256 textures, but is that really enough? Well, if you play a game at 640x480, it's pretty obvious that the average object that comes very close to you will not need much more than 256x256 texels of information. In fact, if you stretch one such texture across the entire screen (say you're looking at a wall that fills your whole view), you end up with a factor of only about 2 to 2.5 pixels per texel. That's really not bad. What about higher resolutions? Higher resolutions work a bit like a zoom: you get more and more pixels to show the same surface. Logic tells us that 256x256 is probably not a high enough texture size for decent quality at resolutions like 1024x768 and up. Naturally, much still depends on the game and how the graphics are constructed. There is definitely a trend towards more and smaller polygons, and small polygons with different textures tend to stay small even when you get close, so 256x256 might be enough. On the other hand, old games tend to use large polygons (in the old days the polygon throughput of the CPU was very low), and that tends to leave you with highly zoomed textures.
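To put numbers on that zoom effect, here is a tiny sketch computing the pixels-per-texel stretch of a single 256x256 texture filling the screen at a few common resolutions:

```python
# Pixels-per-texel stretch of a screen-filling 256x256 texture:
# the horizontal and vertical factors are the "zoom" described above.

TEXTURE_SIZE = 256

for width, height in ((640, 480), (1024, 768), (1600, 1200)):
    fx, fy = width / TEXTURE_SIZE, height / TEXTURE_SIZE
    print(f"{width}x{height}: {fx:.1f} x {fy:.1f} pixels per texel")
# 640x480 stays around 2; at 1024x768 and above, every texel is
# smeared across 3-6 pixels, and the texture starts to look blurry.
```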

It is important to note that there are tricks to get support for higher-resolution textures on the 3dfx Voodoo3, but they require splitting polygons and textures. The technique works, and there are plans to have Unreal Tournament support it. The same manipulation is also possible at the driver level, so there are a few ways around this hardware limitation.
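How Unreal Tournament or the drivers would actually implement this isn't public, but here is a minimal sketch of the general idea: cutting a large texture (and the quad it maps onto) into 256x256 tiles that the hardware can accept. All names here are illustrative:

```python
# Sketch of working around a 256x256 hardware limit by splitting a
# large texture, and the quad it maps onto, into 256x256 tiles.

MAX_SIZE = 256

def split_texture_quad(tex_w, tex_h):
    """Yield one (pixel_rect, uv_rect) pair per 256x256 tile.

    pixel_rect selects the tile's texels out of the big source image;
    uv_rect is the matching sub-quad in the polygon's 0..1 UV space.
    Each tile is uploaded as its own <=256x256 texture and drawn on
    its own sub-quad, so the hardware never sees an oversized texture.
    """
    for ty in range(0, tex_h, MAX_SIZE):
        for tx in range(0, tex_w, MAX_SIZE):
            w = min(MAX_SIZE, tex_w - tx)
            h = min(MAX_SIZE, tex_h - ty)
            pixel_rect = (tx, ty, w, h)
            uv_rect = (tx / tex_w, ty / tex_h, w / tex_w, h / tex_h)
            yield pixel_rect, uv_rect

# A 512x512 texture becomes four tiles on four sub-quads:
for pixels, uv in split_texture_quad(512, 512):
    print(pixels, "->", uv)
```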