So with the rack product and the desk-side supercomputer, will you supply the interface board to the host for the external cabling, or will that be something customers have to buy elsewhere?

Basically it's our engineering from the PCI Express connector back; we engineer and supply everything needed to connect those products to their host systems.

And what happens in situations where you don't have Gen2 on the CPU host?

The whole thing is backwards compatible. Behind the switch on the interface board everything always runs at Gen2 regardless; it's only the link to the host that might negotiate down to Gen1.
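For readers who want to see that negotiation for themselves: NVIDIA's NVML management library, which arrived later than the products discussed here, can report the PCIe link generation a device actually trained to versus what it supports. A minimal sketch, assuming a system with NVML installed (compile with -lnvidia-ml); the calls shown are standard NVML entry points:

    #include <stdio.h>
    #include <nvml.h>

    int main(void)
    {
        if (nvmlInit() != NVML_SUCCESS)
            return 1;

        nvmlDevice_t dev;
        if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
            unsigned int curr = 0, max = 0;
            /* Generation the link actually negotiated (e.g. 1 on a Gen1 host) */
            nvmlDeviceGetCurrPcieLinkGeneration(dev, &curr);
            /* Generation the device itself is capable of */
            nvmlDeviceGetMaxPcieLinkGeneration(dev, &max);
            printf("PCIe link: running Gen%u, device supports up to Gen%u\n",
                   curr, max);
        }

        nvmlShutdown();
        return 0;
    }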

As far as planning core logic for the server space goes, where you'd be supplying the Gen2 host interconnect yourselves: is there any extension of nForce planned there, a Tesla-oriented core logic for connecting those products in the professional or HPC space?

There's a roadmap on the nForce side that might address those segments, but I can't speculate on that.

But since you guys are also a core logic company, you could theoretically support Tesla with your own silicon as the host core logic in the pro, server, or HPC space?

Yeah, and there are places throughout the chain where we do add value like that, SLI being one, but I can't talk about nForce specifically. We still have to work with the other chipsets that are out there, and certainly not all servers will run nForce, so we still need maximum performance on those platforms and products too, whether they're Gen1, Gen2, or whatever else exists in those spaces.

So what leverage does SLI give you with Tesla?

We do have some there, but SLI is primarily a technology for graphics acceleration and rendering; it's a way of having two GPUs cooperate on one graphics workload. We do multi-GPU with Tesla, but we call it multi-GPU because the cooperation happens over the host, not over the SLI connector.

Right, and the API already supports that with CUDA; you just enumerate every available CUDA-capable device connected to the host.

Right, the runtime and the low-level parts are already set up for that.
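The enumeration described in that exchange is a few lines against the CUDA runtime API. A minimal sketch; the printed fields are just illustrative, and the device-selection comment at the end reflects the common one-host-thread-per-GPU pattern rather than anything stated in the interview:

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);   /* number of CUDA-capable devices on this host */
        printf("Found %d CUDA device(s)\n", count);

        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s, %d multiprocessors, %zu MB global memory\n",
                   i, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024 * 1024));
        }

        /* To spread work across the devices, each host thread calls
           cudaSetDevice(i) before launching kernels on device i: the
           "multi-GPU on the host" pattern described above. */
        return 0;
    }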

We looked at the Tesla board PCBs earlier. On a G80 GeForce you have two SLI links on the PCB, but on the Tesla board one of those connectors is present while the second is different, with a different set of pins. We're wondering if that's for something Tesla-specific.

Not currently.

You and I talked a little bit yesterday about using the rackmount version for offline video processing in the movie space. Do you have any firm commitments from customers in that space for a big Tesla-based offline DCC farm?

I would say we have a lot of very strong interest, but we're not announcing any customers in that space yet. The process there is that some software development has to happen up front before a customer like that will commit to the hardware, so it's nothing we can announce just yet.