By now you probably know what the T-Buffer is; if not, you must have missed the tons of articles written months ago. They were all based on the same presentation given by 3dfx at the special tech briefings in San Jose and at ECTS 99 (London). Most of the articles said pretty much the same things, with the exception of a few that offered additional information (such as the one originally published here). Actually, most were just a summary of the PowerPoint presentation given by 3dfx and the white paper they published. Now, I don't want to put you through more of the same, so I'll try to write something a bit different.
First of all, I will introduce the difference between computer-generated images and images captured with a video camera. The next part will explain the basic workings of the T-Buffer. Those two parts might repeat what you've read elsewhere, but I need to get this thing started somehow.
I am sure you have noticed that TV still looks much more realistic than the image generated by your 3D card. There are various reasons for this, but in this article I'll concentrate on the fact that movement (animation) looks smoother, more fluent, and more continuous on TV than in computer-generated graphics.
The reason is, in a way, that computer-generated graphics are "too" perfect. A frame generated by your 3D card is a picture that shows the game world in one very specific situation, corresponding to a single isolated point in time. In the real world, the animation of a car passing by is a car moving from one position to another, passing through all (an infinite number of) positions in between. The motion is continuous. A video camera can capture this continuous motion because each of its 30 frames per second is a recording of the world during a period of time: roughly 1/30th of a second. A CCD chip does this by building up an electric charge during that period. When the period ends, a voltage is read out that is proportional to the amount of light that hit that specific pixel cell. So each pixel is a summation of the light that arrived during (more or less) 1/30th of a second.
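The charge build-up described above is just a running sum. As a minimal sketch (the function name and numbers are illustrative, not from any real camera API):

```python
# Sketch of how a CCD pixel integrates light over one frame's exposure.
# All names and values here are illustrative assumptions, not a real API.

def expose_pixel(light_samples):
    """Sum the incident light over the exposure window (charge build-up)."""
    charge = 0.0
    for sample in light_samples:
        charge += sample          # photons accumulate as electric charge
    return charge                 # read-out voltage is proportional to this

# A bright object passes over this pixel partway through the exposure:
samples = [0.0, 0.0, 1.0, 1.0, 0.0]   # light level at 5 moments in the window
print(expose_pixel(samples))          # the pixel records the sum, not one moment
```

The point is that the pixel can never tell you *when* during the exposure the light arrived, only how much arrived in total, which is exactly what produces blur for moving objects.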
Notice that computer-generated graphics are different. Your 3D card, as I said, only captures the world situation at one very specific point in time. It's like a snapshot: a situation during a VERY small amount of time, a bit like taking a photo with a camera set to a very fast shutter speed. So instead of a summation over a period of time, you get a "perfect" snapshot. Let me illustrate this with an example.
Assume we want to show an animation of a fast car passing by. We have a normal home video camera and a photo camera with a very fast shutter. We use both to capture the scene at 30 frames per second. The video camera sums light over time for each frame; the photo camera uses a shutter time approaching an infinitely small amount of time, a snapshot. So on one side we have a video film, and on the other a stack of photos. Now assume we digitize both and play them back on a computer. The animation from the video film would look much more realistic than the animation formed by the series of photos, which would suffer from a stop-and-go effect. If we compared individual frames, we would notice that the video stills look blurry while the photos look very sharp. The blur is caused by the fact that the video camera sums all the world situations during 1/30th of a second, while a photo captures maybe 1/1000th of a second. You could say the video camera combines a huge number of photos into one video frame: at 1/1000th of a second each, you would need to combine about 33 photos to approximate a single video frame.
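The "combine 33 photos into one video frame" idea can be sketched in a few lines. Here the scene is a hypothetical one-dimensional row of pixels and the "car" is a single bright pixel that moves during the exposure; averaging the snapshots smears it across several pixels, which is exactly the motion blur a video frame shows. The scene setup and numbers are my own illustration, not from 3dfx's materials:

```python
# Averaging many sharp snapshots into one "video" frame, as described above.
# The scene is a 1-D row of pixels; a bright "car" (value 1.0) crosses a few
# pixels during the 1/30 s exposure. All names and values are illustrative.

WIDTH = 10

def snapshot(car_pos):
    """A perfect instantaneous frame: the car occupies exactly one pixel."""
    return [1.0 if x == car_pos else 0.0 for x in range(WIDTH)]

def video_frame(snapshots):
    """Average the snapshots, like a CCD summing light over the exposure."""
    return [sum(col) / len(snapshots) for col in zip(*snapshots)]

# 33 snapshots (1/1000 s each) spanning one 1/30 s exposure; the car moves
# from pixel 2 to pixel 4 during that time.
shots = [snapshot(2 + (i * 3) // 33) for i in range(33)]
print(video_frame(shots))
```

Each snapshot has one sharp pixel at full brightness, but the averaged frame spreads that brightness over pixels 2 through 4 at about a third of the intensity each: a dim smear instead of a crisp car. That smear is what our eyes read as continuous motion, and it is the effect the T-Buffer sets out to reproduce.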