Mipmapping Textures



Mipmapping essentially deals with having multiple images per texture. A texture with a single image is fine, but in a 3D game various levels of detail are needed to create the illusion of depth and to replicate real life, where distant objects appear blurry and become more detailed as you get closer to them. This is where having multiple images of an object comes in. A mipmapped texture is simply a texture with multiple images – a series of prefiltered texture maps of decreasing resolution, from which the image closest to the resolution required by a polygon is selected and used. Mipmapped texturing also helps to reduce an anomaly known as "texture swimming", a situation where textures in the distance appear to "move" all on their own as your position changes.

While mipmapped textures do tend to take a little longer to load, the visual quality is far more impressive than that of standard textures. Smaller mipmap images can also be used instead of scaling any one particular texture image, which somewhat reduces the need for linear interpolation filtering.

Polygons with mipmapped textures, however, will almost certainly require texture filtering, since Level-Of-Detail selection (discussed further below) comes into play. Texture filtering basically computes texture element (texel) values. Magnification filters handle the case where many pixel fragments map to one texel (in other words, magnification filters are for polygons that are bigger than the texture image), while minification filters handle the case where many texels map to one pixel fragment (in other words, minification filters are for polygons that are smaller than the texture image). Where mipmapped textures are concerned, the texture minification filters are the ones that must be used.
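To show where these two filters are actually specified, here is a minimal sketch of setting them on a 2D texture; the texture object name is hypothetical and error checking is omitted:

/* Minimal sketch: specifying magnification and minification filters
   for a 2D texture object ("texID" is hypothetical). */
GLuint texID;
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID);

/* Magnification filter: used when the polygon is bigger than the texture image */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Minification filter: used when the polygon is smaller than the texture image;
   only this filter accepts the mipmap modes listed next */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_LINEAR);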

The various texture minification filters are as follows (I will use the popular game, Quake3Arena, for illustration purposes):

GL_NEAREST
This is nearest-neighbour filtering, also called "point sampling". It is the fastest of all the filtering methods, since it simply takes the texel closest to each pixel in the textured image without any interpolation between texels, but the worst in terms of visual quality.

[Screenshot: Quake3Arena with GL_NEAREST]

GL_LINEAR
This is linear interpolation. It is sometimes referred to as "pure" bilinear filtering, without any mipmapping. Mipmapped textures can improve performance by reducing the need for this particular filter since smaller mipmap textures can be used instead of scaling the one texture image.

[Screenshot: Quake3Arena with GL_LINEAR]


GL_NEAREST_MIPMAP_NEAREST
This is the nearest-neighbour mipmap filter: the mipmap image nearest to the resolution of the polygon is used, and texturing within that image is done with the GL_NEAREST filter.

[Screenshot: Quake3Arena with GL_NEAREST_MIPMAP_NEAREST]


GL_NEAREST_MIPMAP_LINEAR
Here the two mipmap images nearest to the resolution of the polygon are selected and linearly interpolated, while texturing within each image is done with the GL_NEAREST filter.

[Screenshot: Quake3Arena with GL_NEAREST_MIPMAP_LINEAR]


GL_LINEAR_MIPMAP_NEAREST
Here the single mipmap image nearest to the resolution of the polygon is used (as with GL_NEAREST_MIPMAP_NEAREST), but texturing within that image is done with the GL_LINEAR filter. This is also called "bilinear mipmap filtering".

[Screenshot: Quake3Arena with GL_LINEAR_MIPMAP_NEAREST]


GL_LINEAR_MIPMAP_LINEAR
This combines both forms of linear interpolation: the two mipmap images nearest to the resolution of the polygon are linearly interpolated, and texturing within each image is done with the GL_LINEAR filter. It is also called "trilinear mipmap filtering" and offers the highest mipmapping quality short of using anisotropic filtering.

[Screenshot: Quake3Arena with GL_LINEAR_MIPMAP_LINEAR]


The last two methods offer the best visual quality but are also the most expensive in terms of performance; however, with the advent of 3D hardware with multiple rendering pipelines, the performance hit generally associated with bilinear and trilinear filtering can be almost indiscernible.

I mentioned anisotropy, which is actually related to both bilinear and trilinear filtering – you can have bilinear anisotropy as well as trilinear anisotropy. When a textured surface is viewed at an angle less or greater than a right angle to a line or surface of reference (where you are currently standing, for example), its "footprint" in texture space is no longer square but anisotropic – long and perhaps narrow – and all of the texture filters described above will severely blur that image. Anisotropic bilinear or trilinear filtering serves to reduce this blurriness. The degree of anisotropy is a value of at least 1.0, and a degree of 2.0 should be supported as standard by any 3D hardware that offers anisotropic filtering (in theory it can go higher, although there is no guarantee that it will work well, if at all); 2.0 should also be the value used by programmers of 3D games. A particular texture's maximum degree of anisotropy is specified independently of the texture's minification and magnification filters, but the best combination would be trilinear minification (GL_LINEAR_MIPMAP_LINEAR), linear interpolated magnification plus a 2.0 degree of anisotropy. The OpenGL extension for anisotropic filtering is a relatively new one, but with more hardware as well as game support anticipated, it may gain in importance when it comes to providing better visual quality.
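As a minimal sketch of how that combination might be set up through the GL_EXT_texture_filter_anisotropic extension (texture setup and error checking are omitted; the two tokens come from glext.h and are defined here in case that header is not available):

#include <string.h>

#ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_TEXTURE_MAX_ANISOTROPY_EXT      0x84FE
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT  0x84FF
#endif

GLfloat maxAniso = 1.0f;

if (strstr((const char *) glGetString(GL_EXTENSIONS),
           "GL_EXT_texture_filter_anisotropic"))
{
    /* Query the highest degree of anisotropy the hardware supports */
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);

    /* Trilinear minification, linear magnification, 2.0 anisotropy
       (clamped to the hardware maximum), as described above */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT,
                    maxAniso < 2.0f ? maxAniso : 2.0f);
}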


[Screenshot: anisotropic filtering]

UPDATE
At the very last minute, I received a 1024x768 screenshot of Quake3Arena running with 64-tap anisotropic filtering. The "tap" refers to the degree of anisotropy – it has to do with the number of texture element (texel) samples taken for each pixel. This, I believe at the time of this article going live, is the first screenshot of Quake3 running with 64-tap anisotropic filtering. The player location isn't exactly identical to the shots above, but it should suffice in terms of getting to know what anisotropic filtering is about.

[Screenshot: Quake3Arena at 1024x768 with 64-tap anisotropic filtering]

Lastly, OpenGL allows mipmaps to be generated automatically (using the OpenGL Utility Library, GLU – glu32.lib on Windows) from a single high-resolution texture. The problem with this, depending on the specific instance, is that it is basically a form of scaling, and the results can sometimes be inaccurate as well as unsatisfactory; when that happens, generating and specifying the mipmap levels manually is best. Here is example code for 1D and 2D textures using the automatic GLU route, where gluBuild1DMipmaps and gluBuild2DMipmaps take the place of glTexImage1D and glTexImage2D:

/* 1D texture: GLU scales the base image down and uploads every mipmap level.
   Note that, unlike glTexImage1D, gluBuild1DMipmaps takes no border argument. */
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_LINEAR);
gluBuild1DMipmaps(GL_TEXTURE_1D, 3, 8, GL_RGB, GL_UNSIGNED_BYTE, roygbiv_image);

/* 2D texture: same idea, with the width and height taken from a bitmap header */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, info->bmiHeader.biWidth, info->bmiHeader.biHeight,
                  GL_RGB, GL_UNSIGNED_BYTE, rgb);
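When the GLU-scaled results are not good enough, each level can instead be specified manually with glTexImage2D. The sketch below assumes the application has prepared its own prefiltered images; the mipmap_images array and the 64x64 base size are hypothetical:

/* Minimal sketch: manually specifying prefiltered mipmap levels for a 2D
   texture. "mipmap_images" holds application-prepared RGB images from
   64x64 down to 1x1 and is hypothetical. */
int level, size;

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

/* Level 0 is the 64x64 base image; each following level halves the size */
for (level = 0, size = 64; size >= 1; level++, size /= 2)
    glTexImage2D(GL_TEXTURE_2D, level, 3, size, size, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, mipmap_images[level]);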