AMD HD 6950 1GB vs. NVIDIA GTX 560 Ti Overclocking

Rob Williams

Editor-in-Chief
Staff member
Moderator
AMD and NVIDIA both released $250 GPUs last week, and both proved to deliver a major punch for modest cash. After testing, we found AMD to have a slight edge in overall performance, so to see if things change when overclocking is brought into the picture, we pushed both cards hard and then pitted the results against our usual suite.

You can take a look at our full OCing results and then discuss the article here!
 

Relayer

E.M.I.
Nice job, Rob. Interesting results as well, especially the power consumption. I was fully expecting the 6950, once O/C'd, to draw more power than the 560. I also thought there would be a bigger difference in temps, given the current trend of nVidia's improved cooling.

Overall performance varies from one reviewer to the next depending on preferences and setup tendencies. Like you said, the features are really the biggest differences between the two.

Maybe in the future you could spend a bit more time covering those differences in more depth? Everyone mentions PhysX, but nobody actually tests with it on. Same with CUDA: nice feature, if you have software that uses it. Eyefinity? What kind of frame rates, and what level of eye candy, can you expect to be able to use effectively? Video encoding, etc.

P.S. Did you try to unlock the 6950? Not to use the figures in the review; just curious whether current models are still coming through with virtually all of them unlockable, or whether they are starting to fuse them off physically, as some have rumored they will. Strike that. I forgot that the 1GB card doesn't have the dual-BIOS setup. My bad.
 

Kougar

Techgage Staff
Staff member
The physical hardware aspects of PhysX/CUDA aren't features that get "turned on" and cause the card to draw additional current. They are parts of the core that are always active even when not being utilized by software, because of how they are built into the core's processing units themselves. Only a small part of each shader group is devoted to processing them, so PhysX/CUDA-related tasks won't amount to anything more than what the card draws for full 3D gaming.
 

Relayer

E.M.I.

I mean using apps that use CUDA and show the improved productivity possibilities. Game benchmarks with PhysX on to see frame rates. Two cards (when available, of course) in 3D Surround with PhysX. Is there enough GPU power to actually use those features together (3D, multi-monitor, and PhysX)? They are touted as reasons to buy nVidia cards, but rarely, if ever, are they tested (I've never seen them tested together). For AMD, Eyefinity benchmarks, and GPGPU tasks their cards can run.

I'm not expecting the cards to use more power when using these features.
 

Kougar

Techgage Staff
Staff member
My apologies then as I misunderstood your question!

Probably the biggest reason you don't see 3D, multi-display, and PhysX combined is that 3D-capable monitors are uncommon and expensive... and I've yet to hear of any games that actually do 3D well. Although that may just be me, as I wasn't that impressed with most 3D TV & movies either. But in any case, rare as they are, multiple displays in 3D would require more than one 3D-capable monitor. I may be mistaken, but 3D displays don't require additional GPU processing overhead either; they simply change how the final picture is displayed. At most this might result in a slight tweak in CPU driver overhead, but I don't believe so.

As for PhysX, it originally required a separate GPU dedicated to PhysX computing. As NVIDIA's drivers matured this generally stopped being a requirement, but it still requires 512MB of video RAM, with 1GB recommended if a single GPU is used concurrently for both purposes. But the real reason it isn't tested is simply that the gains resulting from overclocking wouldn't be noticed. The PhysX games I am aware of put a fairly low upper limit on how many physics objects get created; with a high-end GPU there are still only going to be so many boxes or explosion effects regardless of what the card is capable of. I am not aware of any games where PhysX is a bottleneck unless it's an old or very cut-down GPU in question...?

Having conducted GPU overclocking tests that utilized a CUDA-based folding program, I found the increase in raw compute performance scaled around the same as FPS would. The actual GPU core architecture (what is enabled/disabled, number of shaders, shader core design, etc.) per model is still going to have by far the largest impact on performance regardless; any GPU overclocking just adds a little extra headroom in the end.
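As a rough illustration of that scaling (the numbers here are just assumed examples, not measured results): a core clock bump from 800 MHz to 880 MHz is a 10% overclock, so in the best case you'd expect up to ~10% more throughput whether it's folding work units or game frame rates.

880 / 800 = 1.10 -> ~10% ceiling, assuming the core clock is the bottleneck
e.g. 50 FPS at stock -> ~55 FPS overclocked, at best

In practice memory bandwidth, CPU limits, and so on usually keep the real-world gain below that ceiling, which is why the architecture itself matters far more than the overclock.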
 

Relayer

E.M.I.
I wasn't aware that there was no performance hit for 3D. I thought each frame had to be rendered twice and would therefore incur a performance hit. Thanks for clearing that up for me. PhysX does incur a performance hit, and I think it would be useful to see what kind of frame rates you should expect with it on, running a single card, seeing as that's how ~90% of people would use it. I don't understand the reference to O/C'ing as a reason not to test certain features? Typically, except for one page, most reviews concentrate on stock-clock tests.

I guess I feel that if it's not worth testing these features, then we shouldn't tell people to base their buying decisions on whether a card has them or not. Don't tout CUDA as a benefit and then not show a single app where you can expect a performance increase because of it.
 

Tharic-Nar

Senior Editor
Staff member
Moderator
Actually, Kougar is incorrect this time (sorry). 3D does incur a performance loss of ~50%, sometimes a little more, because the scene has to be rendered twice from two different angles, plus overhead.
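As a rough back-of-envelope illustration (the 60 FPS starting point is just an assumed example, not a figure from the article):

mono, 1 view per frame: 60 FPS
stereo 3D, 2 views per frame: ~60 / 2 = ~30 FPS, minus a little more for sync and driver overhead

That's where the ~50% (sometimes a bit more) loss comes from.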

Rob can't see in 3D, so he can't test it as part of the suite, since he wouldn't know if it was working properly. Additionally, there are only 15 officially supported 3D Vision games; the rest are tricked into working using the NVIDIA drivers, so results will vary. Full list available here.

As for CUDA processing, this is a very complicated problem. We've constantly been going back and forth over the inclusion of tests, but there are a number of roadblocks, as it were. First of all, the technology is still immature. There are a number of apps that can make use of GPGPU processing, but they are often hard to benchmark.

With something like video encoding, you are restricted by the codec that can be used, and the quality results are terrible in comparison to x86 (as shown by Anand with their QuickSync test on Sandy Bridge). Ray-tracing rendering engines using CUDA have very limited feature support; they lack various processing methods and are completely useless for a final render. Photoshop doesn't use CUDA, but OpenGL, for its processing. There's folding, but again, it's rather limited and constantly changing, which is the second major problem.

For reliable benchmarks, we need fixed metrics and tests. Since CUDA is in a constant state of flux, it's very hard to get accurate and predictable results. Software patches, performance increases, and efficiency improvements would constantly render our results void. What accounted for the increase in performance: the card, the drivers, or the software? We would need to retest all cards with new drivers on the latest software - largely for the benefit of a very small niche.

Tests could be done, but they would be very intermittent and probably not part of the regular test suite - that's how we normally handle them. We have done tests with PhysX, but again as separate articles, such as with Mafia 2. Sure, we could do more tests in the future, but it's a matter of finding a game that makes real use of PhysX instead of just the odd extra bit of flying gravel... (Crysis 2, perhaps?).
 

Unregistered

Guest
Brands used

I'm interested in knowing what brand and version of 6950 1GB was used in that review? After overclocking, it seems it was able to keep pace with the GTX 580, which is very impressive.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator

Unless I mention a brand directly, you can assume that the board being tested is a reference model :) I'll make sure this is obvious in the future.

What's nice is that the cooler that came with the reference HD 6950 was rather standard, so anything third-party on the market should be able to deliver much improved cooling and overclocking abilities. That's the reason I didn't even include a photo of the card... the cooler used was generic and not too attractive ;-)
 