Finance: There are multiple GPGPU applications in the world of finance; some are based on more traditional algorithms, such as ‘simply’ pricing all the stock options in the market in real-time, while others are more custom and require either manual coding (rather prohibitive for GPGPU) or automatic code generation (as in the case of SciFinance, as we will see shortly).
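To illustrate why option pricing maps so naturally onto a GPU, here is a minimal sketch of our own (not SciFinance's generated code) of batch Black-Scholes pricing in NumPy. Every option in the batch is priced independently with identical arithmetic, which is exactly the one-thread-per-option data parallelism a CUDA kernel would exploit; the inputs below are entirely hypothetical.

```python
import math
import numpy as np

def black_scholes_call(spot, strike, rate, vol, t):
    """Price a batch of European call options.

    Every element is computed independently with the same arithmetic,
    so the same code maps directly to one option per GPU thread.
    """
    erf = np.vectorize(math.erf)
    norm_cdf = lambda x: 0.5 * (1.0 + erf(x / math.sqrt(2.0)))
    d1 = (np.log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * np.sqrt(t))
    d2 = d1 - vol * np.sqrt(t)
    return spot * norm_cdf(d1) - strike * np.exp(-rate * t) * norm_cdf(d2)

# price a million hypothetical options in one vectorized sweep
n = 1_000_000
rng = np.random.default_rng(0)
spot = rng.uniform(80.0, 120.0, n)
strike = rng.uniform(80.0, 120.0, n)
prices = black_scholes_call(spot, strike, rate=0.05, vol=0.2, t=1.0)
```

The point is not the formula itself but the shape of the workload: a huge batch of independent, identical computations with no inter-option communication, which is the best case for GPGPU.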

This business is just incredibly hard to forecast because of the diversity of the potential applications and customers. Some might not be very scalable (i.e. there’s no point buying more than a couple of Teslas for it) while others likely are and there are clear justifications for large financial entities to pay for ever increasing performance. At the same time, there are a lot of independent hedge funds in the world and so, if they can perceive a clear value from this, there might be a good amount of volume even from fairly simplistic usage models. In a way, that’s a bit similar to the situation for the EDA market.

Adding to the complexity is the fact that many of these entities are highly secretive, making it hard to even understand what is being done with the technology. Even if GPGPU was a huge success in the financial world in just a year or two, we might not immediately know about the magnitude of it or what algorithms are taking the most processing time. It remains, however, an exciting application of the technology both from a financial and a technical point of view.

Medical Systems & Research: If you’re human, and most of us are, then you’re probably going to die in some horrible way one day or another. Strangely enough, it turns out that most people don’t like that prospect, which is why so much intellectual capacity and money have gone the way of medical science in the last century and even before. So that clearly makes it an attractive market, and from a slightly less cynical point of view, it’s always great to see cutting-edge technology having a positive impact on the world and the well-being of its inhabitants.

There are a lot of interesting applications of GPGPU in the medical world (even after you exclude visualization, which is an important market for NVIDIA’s Quadro business) including Ultrasound Imaging, cancer research, and obviously Folding@Home. The former two were presented at Editors Day, so we’ll be looking into them in just a page or two and it should then become obvious that this is a substantial opportunity. Another important aspect of GPGPU for the medical world, however, is its impact on biology/life sciences research, so let’s look at that very quickly…

Life Sciences & Astrophysics: The two main fields for high performance computing in the life sciences seem to be genome analysis and molecular dynamics, and there is tremendous interest and research into GPGPU for these applications, as well as in other sciences such as astrophysics (see: the AstroGPU workshop), given the stunning speedups compared to CPU-based solutions.

We already described one of the molecular dynamics applications last year, and now there are a number of other implementations for different usages. The amount of public progress is very good, but what’s also very important to understand is that many of the problems in these fields are custom: a scientist or researcher wants to test or determine something, and needs to create a small program to do so. Clearly there are a lot of potential users out there, so this could make a big impact – but whether it will is hard to forecast, as are many other aspects of the Tesla business.
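As a rough illustration of why molecular dynamics is such a natural fit for the GPU (this is our own toy sketch, not taken from any of the implementations mentioned above), here is a naive all-pairs Lennard-Jones energy computation. Each particle's pairwise contributions are independent, which is what a CUDA port parallelises across thousands of threads.

```python
import numpy as np

def lj_potential_energy(pos, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy of a particle cloud, naive O(N^2) all-pairs.

    Each particle's pairwise terms are independent work -- on a GPU,
    one thread (or block) per particle, with shared memory for tiles.
    """
    n = len(pos)
    # all pairwise displacement vectors and distances
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # keep each unordered pair once, skip self-interactions
    iu = np.triu_indices(n, k=1)
    r = dist[iu]
    sr6 = (sigma / r) ** 6
    return float(np.sum(4.0 * epsilon * (sr6 ** 2 - sr6)))

# two particles at the potential minimum, r = 2^(1/6) * sigma
pos = np.array([[0.0, 0.0, 0.0], [2.0 ** (1 / 6), 0.0, 0.0]])
```

Production MD codes add cutoffs and neighbour lists, but the underlying structure is the same: a massive amount of regular, independent arithmetic per timestep.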

One interesting factor here is that many of these scientists likely won’t bother buying a Tesla. Universities will for their clusters, and that’s very good business, but most individual researchers will likely just use a GeForce or two in their workstation. So that’s not all about money, but very much about the positive impact it could have on the scientific community and the world in general. Of course, it also helps GPGPU by making more people aware of it, and that generates momentum in other areas.

Multimedia Encoding & File Compression: Moving away from HPC and into the consumer-centric applications of GPGPU/CUDA, the most widely known application today is video encoding via Elemental’s ‘badaboom’ media converter. From NVIDIA’s point of view, the goal is simple: convince consumers and OEMs that many of the most processing-intensive and/or time-sensitive tasks they do today can actually be offloaded to the GPU, therefore increasing the amount of money spent on graphics to the detriment of the CPU or possibly other system components. Right now, very little money is spent on the GPU relatively speaking, so that’s clearly a huge financial opportunity – in fact, it arguably dwarfs HPC.

It’s not just about video encoding though. NVIDIA is currently running a contest for MP3 encoding, which is obviously less processing intensive but still an appealing use of the GPU. One way we like to look at it ourselves is that if you look at CPU reviews nowadays, the only benchmarked applications that scale with multiple cores are pretty much always the same. And it turns out most of them could actually be significantly accelerated by the GPU, including many file compression and encryption algorithms. We’ll see what happens on that front in the coming months and years, but there’s good reason for Intel to be worried and to take Larrabee’s GPGPU potential very seriously.

It’s also worth pointing out NVIDIA’s relatively recent acquisition of Mental Images in this context. While we don’t know exactly what their business strategy is there, it’s pretty clear that just about every CPU review today benchmarks high-quality offline rendering. Clearly, there’s the potential for that to become irrelevant in the future if NVIDIA plays their cards right. Whether they will, on the other hand, is another question completely.

Physics & Games: While the number of AAA games that are CPU-limited seems to have gone down a bit in recent years, there’s still room for improving the level of realism by throwing a lot more computational power at it via the GPU. Of course, that reduces the amount of performance available to 3D rendering, but there comes a point where even the most beautiful image doesn’t cut it anymore if it looks unrealistic in motion; for example, there’s only so far you can go in terms of human/character rendering without doing at least a bit of cloth & hair simulation, and water without fluid dynamics only makes so much sense after a while.
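For a sense of what even "a bit of cloth simulation" involves computationally, here is a toy mass-spring-style update using position Verlet integration – our own sketch, not any engine's actual code, with entirely hypothetical parameters. Each particle's update reads only the previous frame's state, which is why these per-frame workloads parallelise so well on a GPU.

```python
import numpy as np

def verlet_step(pos, prev_pos, accel, dt):
    """One position-Verlet step for a batch of particles.

    Each particle's update depends only on its own previous state,
    so every row is trivially parallel -- one particle per GPU thread.
    """
    new_pos = 2.0 * pos - prev_pos + accel * dt ** 2
    return new_pos, pos

# toy setup: four particles falling from rest under gravity
gravity = np.array([0.0, -9.81])
pos = np.zeros((4, 2))
prev = np.zeros((4, 2))
for _ in range(10):
    pos, prev = verlet_step(pos, prev, gravity, dt=0.01)
```

A real cloth solver adds spring constraints between neighbouring particles, but those too are evaluated independently per constraint each iteration, so the parallel structure survives.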

Both PhysX, through NVIDIA’s acquisition of the company, and Havok are dedicated to the concept and it will be interesting to see how many developers buy into it. For other GPGPU tasks in games however, which cannot simply be abstracted by a middleware API, there’s the big problem of the incompatibility of CUDA with Radeon hardware and the fact it’s unattractive to have to write & optimize the code again for the CPU. Two important developments in that area are CUDA’s CPU path (discussed previously) and OpenCL, an open GPGPU language proposed by Apple. The combination of the two (or perhaps DirectX 11 Compute Shaders?) would certainly make the concept attractive to game developers.

And just like for the Consumer CUDA tasks described above, the goal once again is to shift value from the CPU to the GPU; in this case, though, it’s much more about dedicated PC gamers than about mainstream consumers. It’ll definitely be very interesting to watch this area in the future, especially given that no one really has any clue whatsoever what’s going to happen (if anything at all).

By the way, while we’re at it: did anyone but us notice that NVIDIA never, ever talks about accelerating rigid bodies in PhysX? While it is indeed theoretically possible on CUDA GPUs through rather complex and inefficient means, it’s probably not fast enough for them to bother offloading it from the CPU (even though it’s probably the first thing most people think about when they hear the word ‘physics’). We’ll be curious to see if that changes in the DirectX 11 generation, and what will happen with Larrabee on that front…