Editor's note: This article originally appeared at Kristof's own site in July of 1998 and is reprinted here by permission with minor formatting changes.

Introduction

Once again the discussion about which bump mapping technique has which advantages and disadvantages has flared up. The funny thing is that it is usually an ex-3Dfx employee who starts it. The first time it was Brian Hook, and this time it is Garry McTaggart (also ex-3Dfx, if I am correctly informed). Even funnier is the fact that they bash the PVRSG perturbed-normal technique and seem to prefer the embossing technique proposed by 3Dfx. So what is it all about...

PowerVR Second Generation supports a Bump Mapping technique that works as follows:

  1. Take a height map as input - this is a file that contains numbers corresponding to certain heights.
  2. Internally this height map is translated into a slope map. This means that the slope is calculated along the u and v parameters (the x and y parameters of the texture and bump map). This is done quite simply by subtracting neighbouring height values from each other to get the change in height in the u and v directions (normalised, of course). These perturbations give the change of the normal relative to a normal perpendicular to the base polygon.
  3. Now when doing the light calculations you take a dot product between the light vector (direction and intensity) and the perturbed normal, that is, the normal of the plane adjusted by the perturbations in the u and v directions. The result is a changed light intensity calculation that takes the bump map into account (through the slope values). A minimal sketch of these steps follows this list.
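
To make the three steps concrete, here is a minimal sketch in C of how a height map could be turned into slope values and how those slopes could perturb a normal for the lighting dot product. The function names and the simple forward differences are my own illustration of the idea, not PowerVR's actual hardware implementation.

    #include <math.h>

    /* Illustration only: height map -> slope map -> perturbed-normal lighting. */

    typedef struct { float x, y, z; } Vec3;

    static Vec3 vnormalize(Vec3 v)
    {
        float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
        Vec3 r = { v.x / len, v.y / len, v.z / len };
        return r;
    }

    /* Step 2: build slopes (du, dv) from a w x h height map by forward
       differences, wrapping at the borders. */
    void build_slope_map(const float *height, float *du, float *dv, int w, int h)
    {
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int xn = (x + 1) % w, yn = (y + 1) % h;
                du[y * w + x] = height[y * w + xn] - height[y * w + x];
                dv[y * w + x] = height[yn * w + x] - height[y * w + x];
            }
    }

    /* Step 3: per-texel intensity = light . N', where N' is the base normal n
       displaced along the u and v tangent directions by the slopes. */
    float bumped_intensity(Vec3 n, Vec3 tu, Vec3 tv, float du, float dv, Vec3 light)
    {
        Vec3 np = { n.x + du * tu.x + dv * tv.x,
                    n.y + du * tu.y + dv * tv.y,
                    n.z + du * tu.z + dv * tv.z };
        np = vnormalize(np);
        float i = np.x * light.x + np.y * light.y + np.z * light.z;
        return i > 0.0f ? i : 0.0f;
    }

With a completely flat height map the slopes are all zero, the normal is left unchanged and you simply get the ordinary shading back; that property becomes important further down.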

Now what is McTaggart's problem?

There are an awful lot of articles popping up on bump mapping with hardware and multitexturing in general. One thing that seems to be overlooked in these articles is the downside to the method in which the PVRSG (PowerVR) does bump mapping. It only chooses a normal with which to bump against once per poly. The SGI shift/subtract embossing method actually interpolates the bump across the face where the normals are decided per vertex. The PVR method will definitely give artefacts (unless there's some hack that I don't know of to work around this) on the boundary of polygons where the normal changes abruptly. What you really want is a bump normal that is effectively interpolated across the surface. The embossing method doesn't exactly interpolate the normal, but it does interpolate the shift values for the embossing giving the appearance of a smooth bump mapped surface.
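
For reference, the SGI-style shift/subtract embossing he refers to works roughly as follows (a rough sketch with my own names, not 3Dfx's exact recipe): at each vertex the light direction, expressed in the surface's local u/v tangent space, gives a small texture-coordinate shift; those per-vertex shifts are interpolated across the face, and the height map is sampled twice and subtracted.

    /* Per-vertex: project the light vector (lx,ly,lz) onto the u tangent
       (tux,tuy,tuz) and the v tangent (tvx,tvy,tvz) to get the shift,
       scaled by a small bump factor. */
    void emboss_shift(float lx, float ly, float lz,
                      float tux, float tuy, float tuz,
                      float tvx, float tvy, float tvz,
                      float scale, float *su, float *sv)
    {
        *su = scale * (lx * tux + ly * tuy + lz * tuz);
        *sv = scale * (lx * tvx + ly * tvy + lz * tvz);
    }

    /* Per-texel (conceptually two texture passes): take the height at (u,v),
       subtract the height sampled at the shifted coordinates, add the
       Gouraud diffuse term and clamp to [0,1]. */
    float emboss_texel(float h_here, float h_shifted, float diffuse)
    {
        float i = diffuse + (h_here - h_shifted);
        if (i < 0.0f) i = 0.0f;
        if (i > 1.0f) i = 1.0f;
        return i;
    }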

Now if I understand his problem correctly, then he is worried about the artefact that can pop up at the border between polygons, where there is a sudden change from the base normal of one polygon to the base normal of its neighbour:





Now is that a problem... well yes: if we assume that PVRSG does its perturbation relative to one normal per polygon, then there is a problem at the edges. This is similar to using flat shading instead of Gouraud shading. If you do not use any bumps at all, so you just use the same normal for all your light calculations, then it is easy to see that we are in the flat shading situation where one light intensity value is obtained for shading (the same light calculation is done for each texel): the whole polygon gets a constant shade. The polygon next to it has a different normal, so it will also have a different light intensity value, and you will see a sudden change in shading colour between the two polygons:



Now that is indeed a problem, because it means that you would get weird artefacts at the edges of polygons.

Now the big question is: "Is there a solution?"

Well, what is the difference between flat and Gouraud shading?
Gouraud shading determines the light intensity (from the normals) at all corners of the polygon and interpolates between them. Flat shading only takes one light intensity for the whole polygon. The result is that with Gouraud shading the light intensity values of polygons lying next to each other are the same at the edges, since the same values (shared vertices) are used for the interpolation. Now can this be combined with the perturbed-normal bump mapping method?
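
The difference can be written down in a few lines (a plain illustration, with barycentric weights standing in for the hardware's scanline interpolation):

    /* i0, i1, i2 are the light intensities computed at the three vertices of a
       triangle; b0, b1, b2 are the barycentric weights of the current pixel
       (b0 + b1 + b2 = 1). */

    /* Flat shading: the whole triangle uses a single intensity, computed once
       from the face normal, so neighbouring triangles can jump in colour at
       their shared edge. */
    float flat_shade(float iface, float b0, float b1, float b2)
    {
        (void)b0; (void)b1; (void)b2;   /* position inside the triangle is ignored */
        return iface;
    }

    /* Gouraud shading: interpolate the vertex intensities; along a shared edge
       two triangles use the same two vertex values and therefore match. */
    float gouraud_shade(float i0, float i1, float i2,
                        float b0, float b1, float b2)
    {
        return b0 * i0 + b1 * i1 + b2 * i2;
    }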

Well yes... the perturbation calculated is relative to a normal. Nobody said whether the normal from flat, Gouraud or Phong shading is used. If PowerVR is doing Gouraud shading it is using different light values (and thus normals) at the corners of its polygons, so why assume that a constant normal is used for the perturbation? If we use a normal interpolated over the whole surface and perturb that, then all our problems are solved: the edges would have the same light intensities (the same normals are used since the vertices are shared), and the result of a bump map that contains no bumps would be a normal Gouraud-shaded polygon. Adding bumps would then introduce no artefacts at the edges, at least if your bump maps fit together properly at the edges. This technique would require the normal at all corners of each bump mapped polygon instead of one normal per bump mapped polygon, as was assumed previously:





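A minimal sketch of this interpolated-and-perturbed-normal idea, with my own names and with barycentric interpolation standing in for whatever the hardware actually does:

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;   /* same helper struct as in the earlier sketches */

    static Vec3 vnormalize(Vec3 v)
    {
        float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
        Vec3 r = { v.x / len, v.y / len, v.z / len };
        return r;
    }

    /* Interpolate the vertex normals n0, n1, n2 per texel, add the bump
       perturbation (du, dv) along the tangents, then light with a dot product.
       Along a shared edge two polygons interpolate the same vertex normals,
       so with a flat bump map this reduces to ordinary smooth shading. */
    float bumped_texel(Vec3 n0, Vec3 n1, Vec3 n2,      /* vertex normals   */
                       float b0, float b1, float b2,   /* barycentrics     */
                       Vec3 tu, Vec3 tv,               /* surface tangents */
                       float du, float dv,             /* bump slopes      */
                       Vec3 light)                     /* unit light dir   */
    {
        Vec3 n = { b0 * n0.x + b1 * n1.x + b2 * n2.x,
                   b0 * n0.y + b1 * n1.y + b2 * n2.y,
                   b0 * n0.z + b1 * n1.z + b2 * n2.z };
        n.x += du * tu.x + dv * tv.x;
        n.y += du * tu.y + dv * tv.y;
        n.z += du * tu.z + dv * tv.z;
        n = vnormalize(n);
        float i = n.x * light.x + n.y * light.y + n.z * light.z;
        return i > 0.0f ? i : 0.0f;
    }
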
There are a couple of reasons why this is probably the case:

The CGDC presentation mentions:

"... For example here the normal is perpendicular to the polygon for all the texels ... "

In their presentation they use a single isolated polygon, so the base normal is always perpendicular to the surface; this simplifies the explanation, and the move to full 3D is quickly handled by saying that normally an interpolated normal is used between the vertices. The "for all" part of the sentence is especially important. The CGDC presentation also says:

" ... Note that if Phong shading is being used at the same time then the normal that is being perturbed would not be perpendicular to the surface, but this does not change the method as the supplied values are a perturbation and not an absolute normal..."

This last sentence shows that VideoLogic knew of the problem, but in their explanation they have taken a giant leap to Phong shading instead of Gouraud shading. Phong shading would supply a true normal at each texel, while Gouraud-style interpolation only approximates this by interpolating between the vertex values for every texel.

Simon Fenney from VideoLogic also wrote the following on Usenet (concerning compatibility with DX6):

"> Doesnt this method require a knowledge of normals etc. by the hardware?
> (How can you pass the light direction in the current API set?)
>  And if so, then I believe this method will *not* be supported by DX6,
> but instead will be supported by DX7 at the earliest.

In the (flexible?) vertex definition of DX6, there is a direction vector.
This can be used to specify the light direction RELATIVE to the surface
orientation at a vertex.

The surface orientation could be defined by the usual normal at the vertex
along with two extra mutually perpendicular (unit) vectors that lie in
the 'plane' of the surface... "
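
What Fenney describes amounts to expressing the light direction in a per-vertex surface basis: the vertex normal plus two mutually perpendicular unit vectors lying in the plane of the surface. A minimal sketch of that transform, with my own names rather than the actual DX6 vertex format:

    typedef struct { float x, y, z; } Vec3;   /* as in the earlier sketches */

    static float dot3(Vec3 a, Vec3 b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    /* World-space light direction -> its (u, v, n) components relative to the
       surface orientation at a vertex, given the two tangents tu, tv and the
       normal n.  This is the "direction vector RELATIVE to the surface
       orientation" that could be stored per vertex. */
    Vec3 light_to_surface_space(Vec3 light, Vec3 tu, Vec3 tv, Vec3 n)
    {
        Vec3 r = { dot3(light, tu), dot3(light, tv), dot3(light, n) };
        return r;
    }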