Bigger is Not Better, Says AMD

Rob Williams

Editor-in-Chief
Staff member
Moderator
From our front-page news:
In the tech world, bigger is better, right? Wrong, according to AMD. When asked by a C|Net blog why they are not building huge chips like NVIDIA, they responded, "We believe this [building smaller chips] is a much stronger strategy than going for a huge, monolithic chip that is very expensive and eats a lot of power and really can only be used for a small portion of the market."

Touché, AMD, touché. It's a valid point. Why build a massive, power-sucking processor rather than pair two efficient processors together for the same performance? It can be argued, though, that this is a moot point, because the fact of the matter is that dual-GPU cards are still watt-suckers. That will remain the case unless the architecture is redesigned to allow one GPU to be shut off while it's not needed.

The other argument that can be made is that one massive GPU is better than two mid-range offerings, because it will increase performance in all games, not only those that can properly take advantage of a multi-GPU setup. A single GPU delivers its full power all the time, while a multi-GPU card may only deliver half of its available power. It varies from game to game, however, and many titles today do handle multi-GPU setups well.

One thing's for sure, though: with AMD's next-gen dual-GPU offering and NVIDIA's massive single-GPU card en route, next month is going to be incredibly interesting.

"We believe this is a much stronger strategy than going for a huge, monolithic chip that is very expensive and eats a lot of power and really can only be used for a small portion of the market," he said. "Scaling that large chip down into the performance segment doesn't make sense--because of the power and because of the size."

Source: C|Net Blog
 

Kougar

Techgage Staff
Staff member
NVIDIA is going to be in a crunch with GT200 because of this.

TSMC just announced it will have to raise fabbing prices for its customers to offset its own costs... NVIDIA chief among them.

On top of that, GT200 has the biggest die size ever... the 65nm GT200 versus the 90nm G80, and it still has roughly 100 mm² more surface area! There are already strong concerns about how many NVIDIA can produce from a 300mm wafer, and the answer is not a great many. With such a massive chip, it only takes a single defect in the right spot to ruin the entire die, and with only so many chips per wafer... estimates that GT 280 will hit $600 as the launch price sound very likely to be correct.
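A quick back-of-the-envelope sketch of why a big die hurts twice: fewer die candidates fit on a wafer, and each candidate is more likely to catch a killer defect. The die areas below are approximate public figures for GT200 (~576 mm²) and G80 (~484 mm²); the 0.3 defects/cm² density is a purely hypothetical assumption, and the simple Poisson model ignores redundancy and binning (i.e. selling partially defective dies as cut-down GT 260-class parts):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Gross die count via the common edge-loss approximation:
    N = pi*d^2 / (4*A) - pi*d / sqrt(2*A)."""
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * d**2 / (4 * a) - math.pi * d / math.sqrt(2 * a))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Poisson yield model Y = exp(-A * D0): a single random defect
    anywhere on the die is assumed to kill the whole chip."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

# ~576 mm^2 GT200 vs ~484 mm^2 G80; D0 = 0.3 defects/cm^2 is hypothetical
for name, area in [("GT200", 576), ("G80", 484)]:
    candidates = dies_per_wafer(area)
    y = poisson_yield(area, 0.3)
    print(f"{name}: {candidates} candidates/wafer, "
          f"{y:.0%} yield, ~{candidates * y:.0f} good dies")
```

Under these (assumed) numbers the bigger die loses on both factors at once, which is exactly why per-chip cost climbs so steeply with area.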

AMD is poised to price HD 4870 pretty low, and it looks like NVIDIA simply has no margin for a price war. ATI could have a field day if MSRP / performance comes out as expected for both camps.
 

b1lk1

Tech Monkey
ATI is poised to really pounce on NVIDIA's lack of foresight in the mid-range market, where MOST people spend their money. I guarantee if they did some research they'd see that most people don't buy high-end cards, but middle-of-the-road cards instead.

I bet the GT 280 will be drastically price-gouged well over its $599 MSRP, and we'll see it stay there for a while. Availability will also be an issue, I bet. If ATI can get enough 4870s into the market on time, they will definitely be kicking NVIDIA in the nuts for months.
 

madmat

Soup Nazi
AMD can't be serious, can they? "Oh, look at us, we're using smaller, simpler chips for our GPUs," all the while hoping that no one looks at Phenom... If that's not a product contrary to their GPU stance, I don't know what is. Personally, it just sounds like they're trying to put a positive spin on the failure that was the R600.

Personally, I don't see AMD coming up with a GPU that's worth a damn, especially if they end up going with multiple chips on a single card. Every physical interface outside of the chip is a point of failure AND a hindrance to clock speeds. I can't see anything but more fail from the boys in red and green.
 

Kougar

Techgage Staff
Staff member
Well, G80 launched the same way, without any real mid-range parts. How is this different? They've become just like Intel: top-end part(s) first, then the rest of the product lineup the next quarter.

I hadn't realized it, but the 8800 GTX launched with 9 processing blocks, only 8 of which were enabled in the flagship GTX, and a mere 6 in the original GTS. The 512MB GTS only used 7, and no parts ever launched with all 9 blocks active (144 shaders).

From the sound of it, NVIDIA will quickly expand from the GT 260 to even further cut-down parts, I'm sure.

RV770 should be ~25% faster than the 9800 GTX, and GT 280 should be ~50% faster, though since NVIDIA missed their original clock speed targets, it could land below that. At least, that's according to the rumor mill...

I guess our bets are placed... going to be fun to see how this plays out. :D
 