GPGPUs?

Psi*

Tech Monkey
Is anyone here into this in some way? I have gotten interested in Nvidia's Tesla C2070 of late, but there is a strong suggestion that 2 or 3 GTX 580s could be faster. A German company, Gainward, has announced a 3 GB version. So would a pair of those linked via SLI equal 6 GB?

How do they appear to software? As one GPU or two?
I need double precision; can the GTX 580s be made to run double-precision code?

The C2070 has 6 GB of GDDR5 with ECC. It actually runs at a lower clock than the gamer cards; I read someplace that is because a government lab demanded 24x7 reliability. I have found it for ~$2.5K, with the typical price at over $3K. Also saw someone indicate that turning off the ECC significantly bumps speed.

After writing all that, it is starting to sound simpler to just get a C2070, which would also be a lot more energy efficient. :confused:
 

DarkStarr

Tech Monkey
580s would be faster, but no ECC. What would be smart is to see if there is a professional card with 3 GB of RAM and the same shader count, then flash the 580 to it; that should enable the professional features. They will appear as multiple GPUs, even in SLI. A BIOS flash would probably enable full DP on a 580 but possibly downclock it, not that you couldn't OC it.
 

Kougar

Techgage Staff
Staff member
The cards can do double precision, yes. Desktop cards have double-precision hardware, but the 580 is software-limited to 1/4th of the GPU core's actual capability... this is to protect the Tesla model line. It also depends on the program and/or software you choose to implement GPU processing with... specifically CUDA versus some other GPU language.
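As a quick illustration of why the single/double distinction matters for this kind of work, here's a minimal NumPy sketch (not CUDA itself, but the float32/float64 types are the same IEEE formats the GPU computes in):

```python
import numpy as np

# Machine epsilon: the smallest relative step each float type can resolve.
# float32 (single) keeps roughly 7 significant decimal digits,
# float64 (double) roughly 16.
print(np.finfo(np.float32).eps)  # ~1.19e-07
print(np.finfo(np.float64).eps)  # ~2.22e-16
```

Nine extra digits of headroom is why simulation codes insist on double precision even when the hardware runs single precision several times faster.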

I don't believe you need to SLI anything to use multiple GPUs for professional CUDA applications. Last I checked, for desktop use (things like Folding@home) the software was written in a way that required SLI to use multiple GPUs properly, but that may have been how the program was designed, as it wasn't written in CUDA.
 
Last edited:

Psi*

Tech Monkey
I ordered a C2070 ... uhnnn ... :eek: Found a site that sells it for somewhat less than $3K. I lost a few nights' sleep over it, then did it on a Friday night. That way I had the rest of the weekend to think about it & cancel if I got cold feet ... didn't happen. I have seen that ECC can be turned off for full bandwidth.

I wonder if it can be OC'ed? Had to ask, because it is the obvious thought! Probably like tweaking an 800 HP engine for another 50 HP.

I wish I had bought some kind, any kind, of GTX video card in the past to experiment with.

Does someone have 1 or 2 or even 3 GTX 580s somewhere out there? Not sure what to do if I do find someone :confused: My software and at least 1 major competitor to that s/w use CUDA.
 

Kougar

Techgage Staff
Staff member
I'm sure it can be overclocked; the question is whether you're willing to risk errors in whatever computational load is running on it. I'm sure you're familiar with unstable graphics cards from overclocking, but it's really the small errors that should be considered. A small error during graphics rendering usually amounts to nothing and won't be noticed or cause a problem; it's only when there are massive errors, or the errors build up, that the game or GPU driver crashes. But for number crunching, a small deviation in the output is still the wrong number. It depends on what sort of workloads you run, I guess. :)
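To make the "errors build up" point concrete, here's a small NumPy sketch (an illustration of sequential rounding accumulation in general, nothing GPU-specific): the same running sum done in single and double precision.

```python
import numpy as np

# Sum 0.1 ten million times; the exact answer is 1,000,000.
n = 10_000_000
x32 = np.full(n, 0.1, dtype=np.float32)
x64 = np.full(n, 0.1, dtype=np.float64)

# cumsum is a sequential running accumulation, so each step's tiny
# rounding error is carried into every later step -- the "build up"
# effect, applied to numbers instead of rendered frames.
sum32 = float(np.cumsum(x32)[-1])
sum64 = float(np.cumsum(x64)[-1])

exact = n * 0.1
print(f"float32 result: {sum32:,.1f}")
print(f"float64 result: {sum64:,.1f}")
print(f"float32 relative error: {abs(sum32 - exact) / exact:.2e}")
print(f"float64 relative error: {abs(sum64 - exact) / exact:.2e}")
```

The double-precision run stays correct to many digits, while the single-precision run drifts visibly off. Nothing crashes; you just quietly get the wrong number, which is exactly the failure mode that matters for scientific workloads.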

And I'm afraid not, I only run single-GPU setups. Multiple GPUs have progressed nicely over the years, but I don't see any point or need to use them personally. I'd rather invest in a single powerful card and sit on it for a few years; otherwise I'd have been happy to test something for ya.
 

Psi*

Tech Monkey
I bought the extended warranty. 1st time in years!

Wouldn't it be interesting to know how a small error might manifest itself? Yesterday I found, & immediately lost, a "commentary" stating that the trimmed-back Teslas (versus something like the GTX 580s) are a result of extensive 24x7 testing at Oak Ridge National Lab. That is multiple cards running 24x7. I can imagine that multiple cards with higher clocks, more cores, etc., could cause some measurable issues. One card though?
 

Tharic-Nar

Senior Editor
Staff member
Moderator
You've probably already read these Ars articles, but they cover the implementation of a home-built HPC system using NVIDIA graphics cards, namely GTX 580s, and make a quick comparison to the older Teslas. In the 3rd or 4th part, due out soon, they'll be covering user-submitted benchmark tests on the system.

The first article is rather simple for the most part, as it covers the basic hardware. The second article is where the real complexity is, as it covers the software aspects such as CUDA and PyCUDA wrappers. Very much worth the read.

Part 1: http://arstechnica.com/science/news...ce-computing-on-gamer-pcs-part-1-hardware.ars
Part 2: http://arstechnica.com/science/news...-on-gamer-pcs-part-2-the-software-choices.ars
 
Last edited:

DarkStarr

Tech Monkey
Sadly, the Teslas seem to be built off GTX 470 tech, so 580s should be much faster (and cooler). Flashing wouldn't be possible unless you took a 3 GB 470 to a Tesla 2050, but AFAIK no 470s come with 3 GB. Looks like it can't be done until they release the newer models.
 