Ok, now we know how we can improve the color depth from a dithered input, and we know it also has some disadvantages. But can we really use these 2 by 2 filters in a hardware 3D accelerator? Well, probably not. The problem with these filters is that they are 2 dimensional: you need information that is spread out in both the x and the y direction. Now, we want to apply this effect just before the RAMDAC. The RAMDAC reads the image linearly, or more precisely, from top to bottom and from left to right, line by line, because that is how a monitor works. Two dimensional filters don't fit into this line based system. If we wanted to use a 2 by 2 filter (or even a 3 by 3), we would need a massive amount of cache or very efficient memory access to fetch all the input data for our filter.
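To make that buffering problem concrete, here is a minimal sketch in C, assuming a 16-bit 5:6:5 frame buffer (the function name, WIDTH constant and plain box averaging are my own illustration, not 3dfx's actual hardware). Note that producing even one output pixel of one output line needs two complete input scanlines:

    #include <stdint.h>

    #define WIDTH 640

    /* 2x2 box filter on the red channel of two 5:6:5 scanlines.
     * To produce ONE output line we must hold TWO input lines at once,
     * so the scan-out logic would need an extra line buffer. */
    static uint8_t filter_2x2_red(const uint16_t *prev, const uint16_t *cur, int x)
    {
        int xr = (x + 1 < WIDTH) ? x + 1 : x;            /* clamp at the right edge */
        int sum = ((prev[x] >> 11) & 0x1F) + ((prev[xr] >> 11) & 0x1F)
                + ((cur[x]  >> 11) & 0x1F) + ((cur[xr]  >> 11) & 0x1F);
        return (uint8_t)(sum * 255 / (4 * 31));          /* average and expand 5 bits to 8 */
    }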

One way to solve this problem is to use a 1 dimensional filter instead, and I am pretty sure that this is what 3dfx does. So what changes? Well, not much. Instead of using the pixels surrounding the actual pixel, we only use the pixels to the left and the right of the pixel we want to show. So when we want to find the 24-bit color of the pixel at position (10,10), we also use the colors of the pixels at positions (9,10) and (11,10). We no longer use any info from the line above or below (the Y value stays constant).
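As a rough illustration (again my own sketch, assuming a 5:6:5 frame buffer; the 1:2:1 tap weights are an assumption, not 3dfx's known coefficients), a 1 dimensional filter like this only ever needs the current scanline, which is exactly the data the RAMDAC is already reading:

    #include <stdint.h>

    /* Expand the 5-bit red component of a 5:6:5 pixel to 8 bits. */
    static uint8_t red5to8(uint16_t p)
    {
        return (uint8_t)(((p >> 11) & 0x1F) * 255 / 31);
    }

    /* 24-bit red value for pixel x, using only its left and right
     * neighbours on the SAME scanline (the Y value stays constant). */
    static uint8_t filter_1d_red(const uint16_t *line, int x, int width)
    {
        int xl = (x > 0) ? x - 1 : x;             /* clamp at the left edge  */
        int xr = (x + 1 < width) ? x + 1 : x;     /* clamp at the right edge */
        /* hypothetical 1:2:1 weighting of left, center, right */
        return (uint8_t)((red5to8(line[xl]) + 2 * red5to8(line[x]) + red5to8(line[xr])) / 4);
    }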

Is there any extra risk involved with this change? Well, yes. By using a 1 dimensional filter you only filter in one direction, the horizontal direction, and this can introduce new unwanted artifacts. Do any of you still remember the complaints about weird horizontal artifacts on the Voodoo 2? Some time ago people complained about this on Usenet... the funny thing was that nobody was able to capture a screenshot containing the effect. The reason for that is simple: these horizontal effects are not present in the frame buffer (which is what the screen capture program reads); they are introduced afterwards, by the one dimensional line filter used by 3dfx.

What happens is that details get spread out, just as with the 2 dimensional filters, but this time the spreading is one dimensional, in the horizontal direction. The resulting problem is that small details are turned into small lines instead of small dots! Let's show this with some examples.

[Images: a dot texture shown five ways - the original, a dithered version, the 2 dimensionally filtered version, and two 1 dimensionally filtered versions]

The top image shows a texture with dots. Now, I am aware that this is a highly artificial texture, but it clearly illustrates what happens. Notice the second image (normal dithering looks identical). The third image is the 2 dimensionally filtered version: the dots have turned into a broad gray vertical line. The last 2 images use a 1 dimensional filter, and they also show small gray lines instead of dots. Let's zoom in to see what happens.


[Images: zoomed-in crops of the same five versions]

The filters spread out the detail. In the case of the 2 dimensional filter, this happens in both the X and the Y direction. In the last 2 images it only happens in the X direction, because the filter only works in that direction: simple dots are turned into small horizontal lines. Now, if there are enough dots like this, you might end up with faint horizontal lines on your screen.
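You can reproduce this spreading with a few lines of code. The sketch below (again using the hypothetical 1:2:1 weights from earlier, on plain 8-bit values for simplicity) filters a row containing a single bright dot. The rows above and below would pass through untouched, so the dot becomes a three pixel horizontal smear:

    #include <stdio.h>

    int main(void)
    {
        int row[7] = {0, 0, 0, 255, 0, 0, 0};   /* one bright dot on an otherwise black row */
        int out[7];

        for (int x = 0; x < 7; x++) {
            int xl = (x > 0) ? x - 1 : x;       /* clamp at the edges */
            int xr = (x < 6) ? x + 1 : x;
            out[x] = (row[xl] + 2 * row[x] + row[xr]) / 4;
        }

        for (int x = 0; x < 7; x++)
            printf("%d ", out[x]);              /* prints: 0 0 63 127 63 0 0 */
        printf("\n");
        return 0;
    }

One dot in, three dots out: repeat this for every noisy pixel on a scanline and the smears start to connect into visible horizontal streaks.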

Some might say that this is far-fetched, but people have actually been complaining about it. Real textures often contain noise-like detail, and these noise dots can turn into unwanted lines because of the color filtering process. Sounds like an unwanted artifact to me.