NVIDIA Clears Up Common CUDA Misconception

Rob Williams

Editor-in-Chief
Staff member
Moderator
From our front-page news:
NVIDIA, never slow to issue corrections on others' behalf, has cleared up some facts about CUDA, and also Intel's Larrabee, in an e-mail to DailyTech. First and foremost, for anyone who hasn't taken a look at our Larrabee article from yesterday, definitely check it out, as it clears up a lot of what makes Larrabee the 'killer' solution in Intel's eyes.

A common misconception has been that Larrabee will improve on NVIDIA's CUDA solution because it's a true x86 solution, allowing plain C/C++ code to be written without the need to learn another language, as CUDA supposedly requires. However, NVIDIA's statement points out that CUDA is not another language: like Larrabee, it's programmed in C, and the CUDA compiler itself is based on the PathScale C compiler.
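To illustrate the point (this is my own minimal sketch, not code from NVIDIA's statement, and the function and variable names are made up): a CUDA kernel is ordinary C plus a handful of extensions, such as the __global__ qualifier and built-in thread-index variables.

```
// Minimal CUDA kernel sketch: standard C, plus the __global__ qualifier
// and the built-in thread-index variables. Kernel name and parameters
// are hypothetical, purely for illustration.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)                                      // guard against extra threads
        data[i] *= factor;
}
```

Aside from those extensions, there's nothing in there a C programmer wouldn't recognize.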

From my understanding, CUDA does require separate C-based libraries in order to utilize the architecture properly, but at this point, I'm unsure whether Intel's Larrabee would work the same way. Intel does tout true 'plug-and-play', so to speak, and we may not know the real development process for the architecture for a while. However, as it stands, NVIDIA's CUDA is not as horrible as others make it out to be, and I'm sure we'll see clear evidence of that in three weeks at NVISION.
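As a rough sketch of what those libraries look like in use (again, a hypothetical example of mine, assuming the standard CUDA runtime API from cuda_runtime.h): the host side of a CUDA program calls runtime functions like cudaMalloc and cudaMemcpy to move data to and from the GPU around a kernel launch.

```
// Hypothetical host-side sketch using the CUDA runtime API
// (cuda_runtime.h). Built as a single .cu file with nvcc.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1024;
    float host[1024];
    for (int i = 0; i < n; ++i)
        host[i] = (float)i;

    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));                     // allocate GPU memory
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice); // copy input to GPU

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);                    // launch 256-thread blocks

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost); // copy result back
    cudaFree(dev);

    printf("host[10] = %f\n", host[10]);  // expect 20.0
    return 0;
}
```

Everything outside the cuda* calls and the <<< >>> launch syntax is plain C, which is presumably NVIDIA's point.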


NVIDIA's approach to parallel computing has already proven to scale from 8 to 240 GPU cores. Also, NVIDIA is just about to release a multi-core CPU version of the CUDA compiler. This allows the developer to write an application once and run it across multiple platforms. Larrabee's development environment is proprietary to Intel and, at least as disclosed in marketing materials to date, is different from a multi-core CPU software environment.


Source: DailyTech
 

Kougar

Techgage Staff
Staff member
I'm still a bit dubious about this. It would have been greatly to NVIDIA's advantage to make this info well known and publicize it...

It would be a huge blow to ATI, though, whose idea of GPGPU computing involves "Close to Metal" assembly-level programming, or BrookGPU. And funnily enough, BrookGPU is a variant of ANSI C, so it's a C-based language too. I don't know enough about coding to know how truthful NVIDIA is being, but supposedly ATI could make the same claim if it wanted to.
 