NVIDIA Teases with Tessellation Performance on GTX 480

Rob Williams

Editor-in-Chief
Staff member
Moderator
Late last week, NVIDIA posted a video to YouTube showcasing the company's upcoming GeForce GTX 480 graphics card, based on its Fermi architecture. In the video, NVIDIA's Tom Peterson explains the perks the card has, how it will compare to the "competition", and of course, why you will want one. Tom emphasizes its superb tessellation performance and explains how tessellation works, with the help of Unigine's Heaven benchmark.


You can read the rest of our post here.
 

Relayer

E.M.I.
While I do find it interesting that Fermi seems to handle tessellation much better than Cypress (being able to draw triangles on screen faster is an important spec), it's also interesting that, beyond that, Fermi doesn't appear to perform any better than Cypress.

Only 2 minutes of one benchmark isn't much to go by, though. Why no more performance figures? It seems strange. If they had more positive figures, you'd think they would show them, wouldn't you? :rolleyes:
 

b1lk1

Tech Monkey
It is classic Nvidia to release as little as possible before a major launch. They want all that suspense, and as I state in most threads like this, Nvidia has little to prove to the green army, who are just salivating at the chance to have these cards. They are not a tough sell no matter the performance, and as long as they at least compete with the HD5870 on a base level, they will sell like wildfire.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
b1lk1 said:
They are not a tough sell no matter the performance, and as long as they at least compete with the HD5870 on a base level, they will sell like wildfire.

But for the projected price of $200 more? I'd have a hard time believing that they'll sell as easily as you say they will if that turns out to be the case. I could believe it if it were even just $100 higher, but $200 is a lot harder to stomach. Plus, we're not even talking about power consumption and temperatures. For all we know, ATI is going to clean house where those two things are concerned.

It's going to be an interesting month, that's for sure.
 

crowTrobot

E.M.I.
A lot of other factors are also in play, like how it compares to the older GTX 2*** models, and how good it is at Folding, CUDA apps, and 3D Vision, three things that are exclusive to NVIDIA right now. I don't think NVIDIA users are that concerned about power consumption, especially those who fold.
 

Tharic-Nar

Senior Editor
Staff member
Moderator
Power consumption is a problem because the PCI-E Graphics (PEG) standard says that a single device cannot draw more than 300 watts of power. This is why the ATI 5970 was deemed an under-performer: it hit the 300-watt wall, yet ATI encouraged overclocking. That's the difference. They can sell you a product that meets the 300-watt criteria and make it very easy to overclock (and consume more than 300 watts), but that is also the problem, since not everyone will. They could of course let the card consume more than 300 watts, but then they couldn't sell the card as a PCI-E compatible card, since it would be in breach of the standard. Also, not everyone folds, nor wants their card consuming 300 watts while in use with the fan screeching away.

I know, it's doubtful that Fermi will pull 300 watts (or at least I hope so), but it hardly screams efficient when it's doing the same work as its competitor while drawing 15-25% more power. If power weren't an issue, we'd be using 3,000-watt computers by now... and in one case, you could (EVGA's dual-socket board fully kitted out).
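To put rough numbers on that 300-watt wall, here's a minimal sketch of a card's in-spec power budget. It assumes the commonly cited per-connector limits (roughly 75 W from the slot, 75 W per 6-pin, 150 W per 8-pin); the helper name is just illustrative, not from any spec document.

```python
# Rough sketch of a card's in-spec power budget, assuming the commonly
# cited PCI-E limits: ~75 W from the slot, ~75 W per 6-pin connector,
# and ~150 W per 8-pin connector.
SLOT_W = 75
SIX_PIN_W = 75
EIGHT_PIN_W = 150

def board_power_budget(six_pins: int = 0, eight_pins: int = 0) -> int:
    """Maximum in-spec board power for a given power-connector layout."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

# An HD 5970-style layout (one 6-pin + one 8-pin) lands exactly on the
# 300 W ceiling, which is why overclocking it pushes it out of spec.
print(board_power_budget(six_pins=1, eight_pins=1))  # 300
```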
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
crowTrobot said:
I don't think NVIDIA users are that concerned about power consumption, especially those who fold.

That's true, but even so, the number of Folders is going to be awfully low compared to the number of people purchasing the cards for pure gaming. As for 3D Vision, I don't really think that's a deciding factor for most people right now, given that very few even have a 120Hz TV or monitor.

Tharic-Nar said:
They could of course let the card consume more than 300 watts, but then they couldn't sell the card as a PCI-E compatible card, since it would be in breach of the standard.

Would NVIDIA even be *allowed* to sell the card if it consumed more than 300W? From my understanding, the PCI-E implementors forum, or whatever it is, would step in and disallow it. I'm sure they have a 300W limit for a reason (although to be honest, I have no idea why it matters, given that the power is mostly handled by the PSU, not the PCI-E slot).
 

Tharic-Nar

Senior Editor
Staff member
Moderator
For the most part, I believe the limits are arbitrary; it's like putting transfer speed limits on flash memory (CompactFlash and such, Class III memory, etc.). There are probably reasons, largely relating to manufacturers, since it doesn't really affect end users.

As for selling a card over 300 watts, they would (probably) only stop the manufacturer if it claimed the card was PCI-E compatible; that is, they could not state in the specifications or on the box that it used PCI-E, since it would be false advertising. They could probably get away with selling the cards, but it becomes a gray area, since the cards would in effect use the PCI-E interface to run but just couldn't declare it, and if the box doesn't state what connection the card uses, no one would buy it without already knowing that you could just plug it in anyway.

Of course, they could just sell them as PCI-E 3.0 (draft) devices, like with 802.11n (draft), since the PCI-E 3.0 standard has not been finalized (but is supposedly due soon-ish). Last I checked, it allowed for up to 300 watts over the interface, with no mention of external power though, so I'd assume it was safe (but of course, speculation as always).
 

b1lk1

Tech Monkey
Irrational fanboyism will dictate the sales of these cards if they are truly $200+ overpriced. If they are closer to the real price that is forecast, then enthusiasts will join in the fray. Any way you look at it, these cards are gonna sell, because there is a giant market just waiting for them, no matter the cost/performance.
 

Envy

Obliviot
b1lk1 said:
It is classic Nvidia to release as little as possible before a major launch. They want all that suspense, and as I state in most threads like this, Nvidia has little to prove to the green army, who are just salivating at the chance to have these cards. They are not a tough sell no matter the performance, and as long as they at least compete with the HD5870 on a base level, they will sell like wildfire.

No offense, but you're an obvious Nvidia fanboy. Most people nowadays either get suggestions from professionals who know their stuff, or they ARE professionals who know their stuff. What I mean by this is that people aren't going to buy the 400 series just because it's Nvidia. They're going to go and buy from AMD/ATI. Why? First of all, it's cheaper. Second of all, nowadays AMD/ATI has about the same performance (a bit better or a bit worse) for a better price. Except for maybe the 5970, which is like $700 now, right? But it still blows the 295 out of the water.
 

Doomsday

Tech Junkie
I will take power consumption and temps into consideration! And IF they fail in comparison to ATI, I'll go for an ATI card, or the GTX 470+!! lol!
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Tharic-Nar said:
As for selling a card over 300 watts, they would (probably) only stop the manufacturer if it claimed the card was PCI-E compatible; that is, they could not state in the specifications or on the box that it used PCI-E, since it would be false advertising.

At that point, I just can't see a release happening, unless it *had* to happen. I couldn't imagine looking at a specs page for a GPU and not even seeing the interface listed. The 3.0 spec is one idea, but I'm not sure that would work either. I'm sure there is a lot more to the 3.0 spec than simply increased power allowances.

Envy said:
No offense but you're an obvious Nvidia fanboy.

b1lk1 is an ATI fanboy, actually ;-)

I tend to agree with you on this, because I just can't see people, no matter how devoted to NVIDIA they are, going out to spend 50% more on a GPU that performs like 10% better than another. Yes, there are a LOT of NVIDIA fanboys out there, but I'd have to imagine that they pale in comparison to the number of regular consumers, who have no brand preference.
 

Envy

Obliviot
Rob Williams said:
At that point, I just can't see a release happening, unless it *had* to happen. I couldn't imagine looking at a specs page for a GPU and not even seeing the interface listed. The 3.0 spec is one idea, but I'm not sure that would work either. I'm sure there is a lot more to the 3.0 spec than simply increased power allowances.

b1lk1 is an ATI fanboy, actually ;-)

I tend to agree with you on this, because I just can't see people, no matter how devoted to NVIDIA they are, going out to spend 50% more on a GPU that performs like 10% better than another. Yes, there are a LOT of NVIDIA fanboys out there, but I'd have to imagine that they pale in comparison to the number of regular consumers, who have no brand preference.

Yeah, I mean, sure, Nvidia makes great cards, but that doesn't mean I have to devote myself to them. I prefer bang for your buck. It's like brand-name clothing today. Who cares so much about what you're wearing?
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
That's a fair comparison, but I'd argue that the brand of clothing could actually affect the design, the material, the overall quality and so forth. In that regard, you'd actually be paying for an improvement. That wouldn't be the case with these GPUs, though, if you are knowingly paying more for less.
 

Kougar

Techgage Staff
Staff member
I would imagine the 300-watt figure is intended to be a safety limit as much as a design standard to aim for. Entire quad-core computers run on 300 W. You might see where I'm coming from if you imagine piping 300 W through 8 yellow wires plus a few of the delicate gold contacts in the PCIe slot itself... then consider some company exceeding that limit. It's also a question of the power supply itself: the power draw for each PCIe rail is supposed to be within a set limit so PSU manufacturers can adhere to their own specifications when designing their units.

It wouldn't matter for a company like PC Power & Cooling that uses a single, massive 12-volt rail, but that's against Intel's ATX specification. Most PSU manufacturers split the 12-volt power amongst 4-6 rails, with each PCIe rail artificially limited to some number I don't recall. If a GPU pulled less power from one rail and tried to make up for it by pulling a bit more power from another capped rail, it would hit the limit and the card would just crash or error out, even though the PSU had the power to spare.
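Purely as illustrative arithmetic (the even split across wires is an assumption, and real cards also pull some power through the slot), here's what 300 W looks like on the 12 V side:

```python
# Illustrative arithmetic only: the current 300 W represents at 12 V,
# split across the 8 yellow (12 V) wires mentioned above.
watts = 300
volts = 12.0
wires = 8  # assumed even split; real cards also draw some power from the slot

total_amps = watts / volts           # 25.0 A in total
amps_per_wire = total_amps / wires   # ~3.1 A per wire
print(f"{total_amps:.1f} A total, ~{amps_per_wire:.2f} A per wire")
```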
 