API assistance for post-filter MSAA, and EAC

It's well known that the D3D10 component of DirectX 10 mandates application access to the individual multisamples, so the developer can program the down-filter themselves and retain control over the resolve. Since the application already knows in advance the state the resolved pixels should be in, with regard to properties like their orientation on an edge or their colour space, it can always build a better filter than the IHV can offer through a generic control-panel setting.
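As a sketch of why application control over the resolve matters, consider the colour-space point above: averaging multisamples in sRGB gives a different (darker) result than averaging in linear space and re-encoding. The function names below are illustrative, not any real D3D10 API; only the sRGB transfer maths is standard.

```python
# Sketch of a custom "resolve" (down-filter) an application might implement
# when it can read individual multisamples, as D3D10 allows. All function
# names are illustrative, not a real API. The point: resolving in linear
# colour space gives a different result than naively averaging sRGB values.

def srgb_to_linear(c):
    return (c / 12.92) if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return (c * 12.92) if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def resolve_pixel(samples_srgb):
    """Average multisamples in linear space, then re-encode to sRGB."""
    linear = [srgb_to_linear(s) for s in samples_srgb]
    return linear_to_srgb(sum(linear) / len(linear))

# A black/white edge pixel, half covered:
samples = [0.0, 0.0, 1.0, 1.0]
naive = sum(samples) / len(samples)   # 0.5 (sRGB-space average)
correct = resolve_pixel(samples)      # ≈ 0.735 (linear-space resolve)
```

A hardware resolve with no knowledge of the surface's colour space can only pick one of these behaviours globally; the application can pick per surface.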

However, the D3D10 approach has limitations, not least that you don't have access to the multisample locations the hardware uses. So one change moving forward would be to expose that information to the application via the API. Looking further ahead, we can imagine something like tagged multisample buffers, where on submitting geometry to the pipeline you decide two things at a per-pixel level: a) whether you want AA performed on that pixel at all, and b) which filter to use if you do. Storing another one or two (compressible) bits per pixel is probably acceptable in the long run, and we wonder if D3D11 has something like that on the cards.
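The tagged-buffer idea can be sketched very simply: a two-bit tag per pixel selects the resolve behaviour. Everything here is hypothetical, as no shipping API exposes this; the tag values, the filters, and the sample ordering are all assumptions for illustration.

```python
# Sketch of the "tagged multisample buffer" idea: a couple of bits per
# pixel record (a) whether to resolve at all and (b) which down-filter
# to use. Purely hypothetical; tag names and filters are invented here.

AA_OFF, AA_BOX, AA_NARROW_TENT = 0, 1, 2   # example 2-bit tag values

def resolve(samples, tag):
    if tag == AA_OFF:
        return samples[0]                   # take one sample; no filtering
    if tag == AA_BOX:
        return sum(samples) / len(samples)  # straight box filter
    if tag == AA_NARROW_TENT:
        # toy "tent": weight the first two samples more heavily (we assume,
        # for illustration, they sit nearer the pixel centre)
        weights = [2, 2, 1, 1]
        return sum(w * s for w, s in zip(weights, samples)) / sum(weights)
    raise ValueError("unknown tag")
```

The storage cost is exactly the "one or two compressible bits per pixel" mentioned above; interior pixels would overwhelmingly share one tag value, so the tag plane compresses well.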

What about the pre-filter ideal?

A move towards real-time exact area coverage?

If pre-filtering is the ideal in terms of 3D image antialiasing, as discussed at the beginning of the article, then NVIDIA's CSAA is a step in that direction from the post-filter side. The extra depth tests used to determine coverage don't give exact area coverage (EAC) of the contributing geometry by any means, but they do determine coverage more correctly, and that more correct coverage results in higher image quality when combined with a good down-filter.
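The essence of the CSAA approach can be shown in a toy resolve: coverage is tested at more points than colour is stored, and the stored colours are weighted by how many coverage samples landed on each. The counts (4 colours, 16 coverage points) match one CSAA mode, but the layout and mapping below are illustrative only.

```python
# Toy illustration of the CSAA idea: coverage is sampled at more points
# than colour is stored. Here 4 stored colours are weighted by how many
# of 16 coverage samples map to each, approximating area coverage better
# than 4 colour samples alone could. Layout and mapping are illustrative.

def csaa_resolve(colours, coverage_map):
    """colours: stored colour samples; coverage_map: one index into
    colours per coverage sample."""
    counts = [coverage_map.count(i) for i in range(len(colours))]
    return sum(c * n for c, n in zip(colours, counts)) / len(coverage_map)

# An edge covering 11/16 of the pixel, white geometry over black:
coverage = [1] * 11 + [0] * 5            # 11 coverage samples hit colour 1
pixel = csaa_resolve([0.0, 1.0, 0.0, 0.0], coverage)   # 11/16 = 0.6875
```

With only 4 colour samples the same pixel could only resolve to multiples of 1/4; the finer coverage granularity is where the quality gain comes from.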

The biggest issue with exact area coverage comes as screen resolution and geometric complexity increase. Higher resolution simply means more pixels and thus a higher total computational cost. More triangles per pixel means EAC gets harder to compute, with cost growing linearly in the number of discrete geometries contributing to a pixel.
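To make the per-pixel cost concrete, here is a minimal sketch of EAC for one pixel: clip each triangle to the pixel square (Sutherland-Hodgman clipping) and measure the clipped area. Each contributing triangle costs a full clip-and-area pass, which is the linear scaling described above.

```python
# Sketch of exact area coverage (EAC) for a single pixel: clip each
# triangle to the unit pixel square and sum the clipped areas. Cost is
# one clip per contributing triangle, i.e. linear in triangle count.

def clip_to_pixel(poly, px, py):
    """Sutherland-Hodgman clip of a polygon to the unit pixel at (px, py)."""
    def clip(poly, inside, intersect):
        out = []
        for i, cur in enumerate(poly):
            prev = poly[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out
    for axis, bound, keep_ge in ((0, px, True), (0, px + 1, False),
                                 (1, py, True), (1, py + 1, False)):
        def inside(p, a=axis, b=bound, ge=keep_ge):
            return p[a] >= b if ge else p[a] <= b
        def intersect(p, q, a=axis, b=bound):
            t = (b - p[a]) / (q[a] - p[a])
            return tuple(p[k] + t * (q[k] - p[k]) for k in (0, 1))
        poly = clip(poly, inside, intersect)
        if not poly:
            return []
    return poly

def area(poly):
    """Shoelace formula; zero for degenerate polygons."""
    if len(poly) < 3:
        return 0.0
    return abs(sum(poly[i - 1][0] * poly[i][1] - poly[i][0] * poly[i - 1][1]
                   for i in range(len(poly)))) / 2.0

# A triangle covering the lower-left half of pixel (0, 0):
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
coverage = area(clip_to_pixel(tri, 0, 0))   # 0.5
```

This also ignores occlusion between triangles; resolving which triangle actually owns each covered region makes the real problem harder still.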

That kind of complexity hurts current early-Z schemes, too. More contributing geometries per pixel makes it likely there are more distinct depths per pixel. The depth buffer doesn't compress well in that situation, so rejecting pixels pre-rasterisation based on compressed Z becomes less effective.
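A toy model shows why more depths per tile hurt compression. One common scheme stores a tile as a single plane equation when every depth in the tile lies on that plane; a second surface intruding into the tile breaks the fit. The representation below is purely illustrative, as real hardware schemes differ per vendor.

```python
# Toy model of depth-buffer plane compression: a tile compresses to one
# plane equation z = a*x + b*y + c only if every depth lies on that plane.
# Representation is illustrative; real hardware schemes differ.

def tile_is_plane_compressible(tile, eps=1e-6):
    """tile: 2D list of depths. True if one plane reproduces every depth."""
    a = tile[0][1] - tile[0][0]            # x gradient from first row
    b = tile[1][0] - tile[0][0]            # y gradient from first column
    c = tile[0][0]
    return all(abs(tile[y][x] - (a * x + b * y + c)) <= eps
               for y in range(len(tile)) for x in range(len(tile[0])))

one_triangle = [[0.1 + 0.01 * x + 0.02 * y for x in range(4)]
                for y in range(4)]
two_triangles = [row[:] for row in one_triangle]
two_triangles[2][2] = 0.9                  # a second surface intrudes
# tile_is_plane_compressible(one_triangle)  -> True
# tile_is_plane_compressible(two_triangles) -> False
```

Once tiles stop compressing, the cheap conservative depth test that lets the hardware reject pixels early has less to work with.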

So if you foresee increasing geometric complexity and growing screen sizes, schemes like EAC become less appealing as something to build into your hardware.

So a move towards EAC for pre-filter AA in hardware might not be on the cards, at least not for desktop hardware. What we're seeing instead on the desktop is a move to better coverage where post-filter AA is concerned, with schemes like CSAA.
Where you might see EAC first, therefore, is in devices with low numbers of screen pixels, where you also have better control over the API when submitting geometry, something we talked about earlier.

Also think of environments where rasterisation and blending are under exact control: vector graphics acceleration, for example. EAC might show up there first, and indeed handheld graphics cores are where accelerated coverage AA has been pioneered in recent years.