Virtual Texturing explained.

While most solutions to the texture problem either cost you quality or add extra overhead (PowerVR's tile-based rendering process being the exception), VT (Virtual Texturing) gives you a smoother texture flow, taking the stress off the bandwidth, and, through smart memory management, lets you fit bigger textures in your on-board RAM.

A small recap: what problems did we have? First there is the bandwidth problem, and then the lack of available RAM. But we have 64MB cards, and soon 128MB cards; is this not enough? Well, for current engines it might be. But think of the example I gave: an engine that did not re-use textures and used 512*512*32-bit textures at minimum, plus a lot of 1024*1024*32-bit textures, would run out even with 128MB.

John Carmack made this clear in one of his .plan updates. (I normally would not use pieces of other people's work, as I'm afraid I would mess up the writer's explanation, so I have made Carmack's plan available right here.)

"If you had all the texture density in the world, how much texture memory would be needed on each frame?

For directly viewed textures, mip mapping keeps the amount of referenced texels between one and one quarter of the drawn pixels. When anisotropic viewing angles and upper level clamping are taken into account, the number gets smaller. Take 1/3 as a conservative estimate.

Given a fairly aggressive six texture passes over the entire screen, that equates to needing twice as many texels as pixels. At 1024x768 resolution, well under two million texels will be referenced, no matter what the finest level of detail is. This is the worst case, assuming completely unique texturing with no repeating. More commonly, less than one million texels are actually needed.

As anyone who has tried to run certain Quake 3 levels in high quality texture mode on an eight or sixteen meg card knows, it doesn't work out that way in practice. There is a fixable part and some more fundamental parts to the fall-over-dead-with-too-many-textures problem.

The fixable part is that almost all drivers perform pure LRU (least recently used) memory management. This works correctly as long as the total amount of textures needed for a given frame fits in the card's memory after they have been loaded. As soon as you need a tiny bit more memory than fits on the card, you fall off of a performance cliff. If you need 14 megs of textures to render a frame, and your graphics card has 12 megs available after its frame buffers, you wind up loading 14 megs of texture data over the bus every frame, instead of just the 2 megs that don't fit. Having the cpu generate 14 megs of command traffic can drop you way into the single digit frame rates on most drivers."

And: "The primary problem is that textures are loaded as a complete unit, from the smallest mip map level all the way up to potentially a 2048 by 2048 top level image. Even if you are only seeing 16 pixels of it off in the distance, the entire 12 meg stack might need to be loaded."

So here we see another problem: not only the size of the textures, but also the way programmers build the video card's drivers, can be part of the problem.