I guess the first thing we want to ask is, "What's the sell for Tesla?" Why would you buy Tesla products when you can run CUDA on any GeForce 8 you can buy in any store? What extra do you get?

That's a good question. It's not so much the actual GPU itself as the products it goes into. If you look at where we're primarily focusing it, in oil and gas and finance and places like that, it's large-scale rack deployments. As we were looking at designing those types of products, it was quite clear that the kind of board we had to design, and the way it was integrated into a system, couldn't necessarily have graphics connectors, and it doesn't need them. So we evolved down this path of sitting in the customer's seat, rather than the builder's or product creator's, and asked ourselves, "What does the person looking at a product like this want to buy? What are their requirements?" That's the primary reason, and as we look at the product line going forward, we want to evolve products that match those needs. It's a similar situation to what you see with GeForce and Quadro: GeForce customers expect a certain type of product at a certain time, while Quadro has a longer product life cycle because of its use with professional applications, so Tesla will follow a similar pattern to Quadro.

So will you qualify CUDA apps and support those?

Yeah. Take something like a professional simulation code: say you talk to someone like Acceleware or Headwave, they qualify their application on a certain set of boards in a certain set of systems. The customer downstream is buying a simulation machine with a certain value, speed, and functionality, and in general the workstation product is qualified around the whole package, not just the graphics hardware. It's the same at the server level: we'll have a server product and companion CPU servers that we work with, so those suppliers can say, "OK, we've tested these sets of systems together, we know they work."

So you'll recommend certain Tesla products be integrated with certain HPC platforms?

Exactly right, because that's what the customer in this space is looking for. We already do that on the Quadro Plex side, which can be racked, so we test it with certain servers because it's a professional product line.

So where does Tesla pricing sit relative to Quadro, especially for the desk-side supercomputer and the standalone board, which have equivalent Quadro versions?

Again, you have to view it two different ways. When you're buying a graphics product there's a certain graphics value built in, but when you go into the HPC business there's a different value, and it's set by the performance and pricing of the whole solution, not just the GPU itself. So Tesla is priced relative to other products based on that value and the work we have to do to bring it into that space; that's how the pricing is derived. We don't view it as Quadro versus Tesla, since they're really totally different customers, and we'll be working with different IT managers for each, because visualisation and HPC involve different sets of technologies.