Intel's Larrabee Computational Performance Beats Out Competition

Rob Williams

Editor-in-Chief
Staff member
Moderator
From our front-page news:
When Intel first announced its plans to launch its own discrete graphics card, Larrabee, it seemed like everyone had an opinion. If you talked to a company such as AMD or NVIDIA, laughter, doubt, and more laughter were sure to follow. The problem, of course, is that Intel has never been known for graphics, and its integrated solutions get more flak than praise, so to picture the world's largest CPU vendor building a quality GPU... it seemed unlikely.

The doubt surrounding Larrabee was even more pronounced this past September, during the company's annual Developer Forum, held in San Francisco. There, Intel showed off a real-time ray-tracing demo, and for the most part, no one was impressed. The demo was based on Enemy Territory, a game built on an even older engine. Needless to say, hopes were dampened even further after this lackluster "preview".

But hope is far from lost, as Intel proved at the recent SC09 supercomputing conference, held in Portland, Oregon. There, the company showed off Larrabee's compute performance, which hit 825 GFLOPS. During the showing, the engineers tweaked what they needed to, and the result broke the 1 TFLOPS barrier. But here's the kicker: this was achieved with a real benchmark that people in the HPC community rely on heavily, SGEMM (single-precision general matrix multiply). What did NVIDIA's GeForce GTX 285 score in the same test? 425 GFLOPS. Yes, less than half.
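For the curious, the GFLOPS figure from a run like this falls straight out of the multiply's operation count (2*M*N*K) divided by wall time. Here's a minimal sketch of that measurement in Python/NumPy; the matrix sizes are illustrative, since Intel's exact test parameters weren't published:

```python
# Rough sketch of how an SGEMM throughput number is typically derived:
# time a single-precision matrix multiply, then divide the operation
# count (2*M*N*K) by the elapsed time. Sizes here are illustrative.
import time
import numpy as np

M = N = K = 4096
A = np.random.rand(M, K).astype(np.float32)
B = np.random.rand(K, N).astype(np.float32)

start = time.perf_counter()
C = A @ B                      # dispatches to the BLAS sgemm routine
elapsed = time.perf_counter() - start

flops = 2.0 * M * N * K        # one multiply + one add per inner step
print(f"SGEMM throughput: {flops / elapsed / 1e9:.1f} GFLOPS")
```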

This is impressive in all regards, because if Intel's card surpasses the competition by such a large degree where complex mathematics is concerned, then AMD and NVIDIA might actually have something to be concerned about. Unfortunately, high computational performance doesn't guarantee equally impressive gaming performance, so we're still going to have to wait a while before we can see how it stacks up there. But this is still a good sign. A very good sign.

[Image: Intel Larrabee]

As you can see for yourself, Larrabee is finally starting to produce some positive results. Even though the company has had silicon for over a year and a half, the performance simply wasn't there, and naturally, whenever development hits a snag, you either give up or give it all you've got. After hearing that the "champions of Intel" moved from CPU development onto the Larrabee project, we can now say that Intel will deliver Larrabee, at whatever price the company is willing to pay.


Source: Bright Side of News*
 

Kougar

Techgage Staff
Staff member
Well, it is not mentioned in that article, but other factors affect GPU computing performance tremendously... Single-precision versus Double-precision calculations being a huge one. Which precision is tested changes who has the "lead" just as much as whether OpenCL, DirectCompute, CUDA, or Brook was used. GT300 was designed heavily around double-precision workloads, as those are more often used in scientific computations.
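To see that precision gap firsthand, here's a quick sketch (Python/NumPy, sizes arbitrary) that runs the same matrix multiply in both precisions on whatever hardware your BLAS uses; the single-to-double throughput ratio varies wildly between chips, which is exactly why precision has to be stated alongside any GFLOPS claim:

```python
# Time the same matrix multiply in single and double precision.
# The SP/DP throughput ratio differs a lot between architectures,
# so a GFLOPS number means little without the precision attached.
import time
import numpy as np

n = 4096
for dtype, name in [(np.float32, "SGEMM (single)"), (np.float64, "DGEMM (double)")]:
    A = np.random.rand(n, n).astype(dtype)
    B = np.random.rand(n, n).astype(dtype)
    start = time.perf_counter()
    A @ B
    elapsed = time.perf_counter() - start
    print(f"{name}: {2.0 * n**3 / elapsed / 1e9:.1f} GFLOPS")
```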

Basically, I don't think there is a single one-size-fits-all benchmark for this new market. There are just too many different aspects.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Well, it is not mentioned in that article, but other factors affect GPU computing performance tremendously... Single-precision versus Double-precision calculations being a huge one.

That's true, but this is still a "standard" benchmark that people pay attention to. I agree, though; it would have been nice to see comparisons to other, more recognizable or worthwhile benchmarks.
 

Doomsday

Tech Junkie
'Intel graphics have always been sufficient for basic computing and multimedia functions but anyone who plays games wouldn't be caught dead running one.'

Well, the performance was not up to NVIDIA and ATI's level, so it kinda makes sense. I always wondered why Intel was going through with this, and had a feeling (the Force!) that Larrabee would not succeed!

But if it had, think about the competition; the prices would've come down like... well, let's just say a $200 HD 5870, or a $100 GTX 260... w00t!
 

Kougar

Techgage Staff
Staff member
Sigh. I don't have time to post about this at the moment, but Larrabee is dead.

http://news.cnet.com/8301-13924_3-10409715-64.html

Well, interestingly, the concept appears to still be going strong. Intel hasn't killed the idea, just the currently planned launch. It looks like they will wait until they can shrink the chip and add still more cores to it...

What's being hypothesized is that its current form just isn't powerful enough, and that kinda makes sense. 32-48 Pentium-class cores might be decent for calculations, but that isn't going to hold water for graphics rendering against cards with 200+ extremely specialized function units.
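As a back-of-envelope check on the "decent for calculations" part, the math does land near the 1 TFLOPS Intel demoed. The figures below are assumptions, not published specs: 16 single-precision lanes per core's 512-bit vector unit, a multiply-add counted as two FLOPs per lane per cycle, and a guessed ~1 GHz clock:

```python
# Back-of-envelope peak single-precision FLOPS for a Larrabee-style chip.
# All figures are assumptions, not official specs.
cores = 32
lanes_per_core = 16      # 512-bit vector unit / 32-bit floats
flops_per_lane = 2       # multiply-add = one mul + one add per cycle
clock_ghz = 1.0          # assumed; Intel never published clocks

peak_gflops = cores * lanes_per_core * flops_per_lane * clock_ghz
print(f"Peak: {peak_gflops:.0f} GFLOPS")  # ~1024 GFLOPS, i.e. ~1 TFLOPS
```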
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
It makes total sense that the current version just wouldn't offer the power people want to see, but it's still unfortunate. Given the sheer number of doubters out there, I kind of wanted to see Intel release something impressive, so canceling the "gen 1" card, so to speak, is a little upsetting.

You might be right, though... once the die shrinks and more cores can fit, Intel might push out an actual card. But by that time, AMD and NVIDIA may be even further ahead. It just goes to show... even as the world's largest processor maker, you can't simply go and create a GPU. It's tough.
 