Well, that leads into another question, because the biggest growth area of the company appears to be in the handheld segment. It appears that you’ve had an explosion of employees recently, as it seemed that you stood at 1900 employees for a long time and now it’s standing at 2500.

We’re currently at 2600 employees, although I think what happened is we weren’t being diligent about keeping that 1900 number in our external communications current! 100 of that increase came from converting our rep firm in Taiwan to a direct team.

...but the growth does appear to be in the other areas rather than the PC desktop part of the business...

We’ve grown some there, but you’re right, we’re at 300+ people in the consumer group and it’s growing much faster than the rest of the company, and we expect it to.

At the moment the business model operates along the lines of taking the high end and trickling the technology down to the low end, even as far down as the handhelds. Is there ever a point where you look at the amount of R&D invested in the high end products – especially when you’re now looking at 160 million transistor parts from yourselves and 222 million from NVIDIA, and you’ve already previously acknowledged that your margins will drop from the cost of these chips – and conclude that this is getting a little ludicrous at the high end?

No.

In every one of our segments, including handheld and DTV, we’re coming out at the high end, because what we find is that the high end drives the innovative thinking and it drives the customers’ desire from a branding standpoint, as they like to have the halo around their product line – a Sony or a Mitsubishi or a Qualcomm is going to want that. So you have to drive at the high end.

I think you’ve got a valid point though, because there’s the high end and then there’s the lunatic fringe, and you cross a point there of going too high at the high end. What we’ve looked at from a technical standpoint, in order to achieve that ultra high end, is whether it’s better to take a performance part and figure out a way to scale it up through two parts, as an example, or whether it’s better to design at the high end and then figure out how to slice it down to meet the lower end markets. We do keep challenging ourselves at an architectural level as to whether you end up creating inefficiencies in the low end because you are designing so high up, which ends up making you less competitive at the low end. And what we are learning is that the answer is yes: there are features that we’re designing for the high end that are very hard to pull out as you move down, so we’re looking at features like Hi-Z [Hierarchical-Z Buffer], some of the floating point precision questions and the 3.0 shader model, and figuring out how to implement them at the high end and also how to pull some out for the low end, and even integrated parts, effectively. That’s a challenge.

However, the big thing that I feel, and I came in with this conviction, was that watching us try to target the mainstream and win was a dying model, so we said we are going to have to arc up. R300 was really the first part where we opened up the thinking to what you can do to hit performance and hit schedule while relaxing on die size to an extent, and I think it helped ATI get back in the game.

At the end of the day though, is there really the desire to continue with that – is the drive there to keep pushing that type of model?

There’s always the debate of who steps down first. I think what’s going to happen is we’re going to hit a power limit, so through other innovations and technologies we have to manage efficiencies. And then I’ve heard some tongue-in-cheek talk that NVIDIA isn’t counting die-per-wafer, but wafers-per-die, and whenever that’s the case you’ve certainly crossed a threshold!