NVIDIA's Fermi Goes Pro-GPGPU, Aims to be Faster than HD 5870

Rob Williams

Editor-in-Chief
Staff member
Moderator
From our front-page news:
Earlier this week, NVIDIA unveiled its "Fermi" architecture, one with a massive focus on general-purpose computation. As far as GPGPU (general-purpose GPU computing) is concerned, NVIDIA has been at the forefront of pushing the technology. Others, such as ATI, haven't been too focused on it until very recently, so in many regards, NVIDIA is a trailblazer. The adoption of GPGPU as a standard has been slow, but I for one am hoping to see it grow over time.

The reason I want to see GPGPU grow is this. We all have GPUs in our computers, and some of us run out and pick up the latest and greatest... cards that deliver some incredible gameplay performance. But how much of your time spent on a computer is actually for gaming? It'd be great to tap into the power of our GPUs for other uses, and to date, there are a few good examples of what can be done, such as video conversion/enhancement, and even password cracking.

No, I didn't go off-topic, per se, because it's NVIDIA who's pushing for a lot more of this in the future, as Fermi has been designed from the ground up to be a killer architecture for both gaming and GPGPU. According to the press release, the architecture supports C++, C, Fortran, OpenCL, DirectCompute and a few others; it adds ECC memory and delivers 8x the double-precision compute power of the previous generation, among other things. Think this is all marketing mumbo jumbo? Given that one laboratory has already opted to build a supercomputer with Fermi, I'd have to say there's real potential here for a GPGPU explosion. Well, as long as Fermi does happen to be good for all-around computing, and isn't only good for video conversion and Folding apps.
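If you're wondering what all that double-precision talk actually looks like in code, here's a toy CUDA example of my own (a minimal sketch, not anything from NVIDIA's materials) that computes y = a*x + y on the GPU. It's exactly this kind of double math that Fermi claims an 8x speed-up on versus the previous generation:

```cuda
// Toy example: double-precision "daxpy" (y = a*x + y) in CUDA.
// Hypothetical sketch for illustration; names, sizes and values are my own,
// and error checking is omitted for brevity.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void daxpy(int n, double a, const double *x, double *y)
{
    // One array element per GPU thread.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                  // 1M elements
    const size_t bytes = n * sizeof(double);

    // Fill host-side input arrays.
    double *hx = (double *)malloc(bytes);
    double *hy = (double *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0; hy[i] = 2.0; }

    // Copy them to the GPU.
    double *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch with 256 threads per block, enough blocks to cover n.
    daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);

    // Bring the result back and spot-check it: 3.0 * 1.0 + 2.0 = 5.0.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The point is that this is plain C-style code, not shader trickery, and that's a big part of what NVIDIA is selling with Fermi.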

Since Fermi was announced, there have been some humorous happenings. Charlie over at SemiAccurate couldn't help but notice just how fake NVIDIA's show-off card was, and pointed out all the reasons why. And though NVIDIA e-mailed him to say that it was indeed a real, working sample, Fudzilla has supposedly been told by an NVIDIA VP that it was actually a mock-up. At the same time, Fudzilla was also told by Drew Henry, GM of GeForce and ION, that launch G300 cards will be the fastest GPUs on the market in terms of gaming performance, even beating out the newly-launched HD 5870. Things are certainly going to get exciting if that proves true.

(Image: NVIDIA Fermi die shot)

"It is completely clear that GPUs are now general purpose parallel computing processors with amazing graphics, and not just graphics chips anymore," said Jen-Hsun Huang, co-founder and CEO of NVIDIA. "The Fermi architecture, the integrated tools, libraries and engines are the direct results of the insights we have gained from working with thousands of CUDA developers around the world. We will look back in the coming years and see that Fermi started the new GPU industry."


Source: NVIDIA Press Release
 

Doomsday

Tech Junkie
Finally! Something to be excited about from NVIDIA's side of things! Though the last paragraph is somewhat disturbing about the 'mock-up'... BUT, can't wait for the G300s!
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
I agree on both counts. I was starting to get a little worried about NVIDIA. It seems like lately, they're falling behind in every market they're involved in (I'm not sure about the workstation market, however), so it's good to see such bold claims about the G300. As for the mock-up, if the real card was really still a prototype, then I can understand using one, but they really should've put more time and thought into the one they showed. It makes me wonder, though. Rumor is that the first G300 cards should launch late this year. If the card is currently in such a state that it can't even be shown, how's it going to be ready so soon?

I guess we'll have to wait and see.
 

Kougar

Techgage Staff
Staff member
Some days I just wish I knew what FUD Charlie was going to stir up so I could avoid it; his stuff drives me nuts.

Sure, it was a mock-up card; that's standard practice. The one inside the demo system was supposedly real, and that's where it matters.

At any rate, this should prove to be one hell of a GPU... I just hope for NVIDIA's sake it offers enough performance over the HD 5870 that they can get away with the doubtlessly much higher prices they'll ask for it. I'm glad ATI is back in the game though; it will force NVIDIA to price competitively, if not aggressively. :)
 

Doomsday

Tech Junkie
Kougar said:
At any rate, this should prove to be one hell of a GPU... I just hope for NVIDIA's sake it offers enough performance over the HD 5870 that they can get away with the doubtlessly much higher prices they'll ask for it. I'm glad ATI is back in the game though; it will force NVIDIA to price competitively, if not aggressively. :)

yup yup!!
 

Psi*

Tech Monkey
I am absolutely fascinated with where computer science has gone over just the past 3 to 5 years. There are sooooo many ways to make complex algorithms faster... and this does, or could, apply to any worthwhile game.

Take a GPGPU like this and add MPI, distributed computing, multi-core processors, and multi-socket motherboards that can support multiple GPU cards... I think it's clear that hardware is far ahead of software when it comes to making adequate use of this cheap technology.
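For what it's worth, CUDA can already enumerate and address every card in a multi-GPU box today; it's dividing the work across them (and across nodes via MPI) that software hasn't caught up on. A rough sketch of the enumeration side, my own illustrative example:

```cuda
// Sketch: enumerate the CUDA devices in a multi-GPU system.
// My own toy example, not tied to anything in this thread;
// error checking omitted for brevity.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) found\n", count);

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("device %d: %s, %d multiprocessors\n",
               d, prop.name, prop.multiProcessorCount);

        // Real work would be issued per device after cudaSetDevice(d),
        // typically from one host thread (or MPI rank) per GPU.
        cudaSetDevice(d);
    }
    return 0;
}
```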

What are the game developers doing to take advantage of this? Do any of them have a roadmap? I suspect that military developers are at least the inspiration for this kind of innovation.
 