NVIDIA Uses Undisclosed Trick to Improve 9600 GT Performance

Rob Williams

Editor-in-Chief
Staff member
Moderator
From our front-page news:
NVIDIA's 9600 GT is a well-rounded card, as we found out in numerous reviews late last month, but according to techPowerUp!, the card has a new feature that the company did not relay to reviewers. Stranger still, after some investigation, it seems most NVIDIA employees were unaware of the feature as well.

On most GPUs, increasing the PCI-E bus frequency in the BIOS yields no performance benefit. It increases the available bandwidth between the GPU and the Northbridge, but the extra bandwidth normally goes to waste. On the 9600 GT, however, raising the PCI-E frequency also raises the card's core clock, resulting in a slightly faster card. The 8800 GT used for comparison in the article showed no improvement whatsoever with a faster PCI-E clock.

No gaming benchmarks were used in the article, which would have been nice, but the fill-rate test in 3DMark06 showed a significant increase. The question is... why did NVIDIA not disclose this feature to reviewers, or even make note of it in their press release? It's a feature that does improve performance, so it's odd to omit any mention of it. That aside, free performance is good performance.

[Image: nvidia_9600_sli_official_030308.jpg]
On "normal" VGA cards, when you increase the PCI-Express bus frequency you increase the theoretical bandwidth available between card and the rest of the system, but do not affect the speed the card is running at. On the GeForce 9600 GT, a 10% increase in PCI-Express frequency will make the card's core clock run 10% faster!

Source: techPowerUp!
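To put rough numbers on that, here's a minimal sketch of the scaling techPowerUp! describes. The 650 MHz core and 100 MHz PCI-E clocks are the reference values; the strictly linear 1:1 scaling is an assumption based on their "10% bus, 10% core" observation.

```python
# A back-of-the-envelope sketch, assuming the 9600 GT's core clock scales
# 1:1 with the PCI-E reference clock as techPowerUp! describes.

STOCK_PCIE_MHZ = 100.0   # standard PCI-E reference clock
STOCK_CORE_MHZ = 650.0   # GeForce 9600 GT reference core clock

def effective_core_clock(pcie_mhz: float) -> float:
    """Core clock if it tracks the PCI-E bus frequency linearly."""
    return STOCK_CORE_MHZ * (pcie_mhz / STOCK_PCIE_MHZ)

for pcie in (100, 110, 115, 125):
    print(f"PCI-E {pcie} MHz -> core ~{effective_core_clock(pcie):.0f} MHz")
```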
 

Merlin

The Tech Wizard
What is more amazing is that this motherboard has no CPU or RAM

Okay, I'm kind of goofy, been up all night with the FLU... it's a nationwide epidemic here in the States

Merlin....... on medication
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Haha, you are quite the smart ass. It also has no anything, just for the record. Sorry to hear you have the flu... it really seems that it's going around. I know a handful of people who either have a cold or a full-fledged flu.
 

Kougar

Techgage Staff
Staff member
Not happy at all about this. Simply raising the PCIe bus frequency yields no performance improvement on its own, so there was no reason to tie GPU clocks to the PCIe bus frequency... especially when everything from SATA drives to Gbit Ethernet runs off the PCIe bus.

I'd bet anything there are going to be at least a few users reliving those 33 MHz PCI bus overclocking days again...
 

NicePants42

Partition Master
Kougar said:
so there was no reason to tie GPU clocks to the PCIe bus frequency...

TechPowerUp! said:
It has the potential to offer hassle-free performance improvements to a large number of less experienced users. Being able to adjust this frequency in most modern BIOSes is a big plus because it will be applied without any software installation requirement in Windows (or any other operating system - there is your Linux overclocking).

Kougar said:
I'd bet anything there are going to be at least a few users reliving those 33 MHz PCI bus overclocking days again...
From reading the article, it sounds like only certain nVidia chipset motherboards will automatically overclock the PCIe bus. I would hope that motherboard OEMs would be intelligent enough to either use components that can handle the extra speed, or limit the boost.

Assuming that's the case, it should only be a problem for those who manually overclock the PCIe bus in the BIOS to the point where either the GPU core or some component on the motherboard becomes unstable. This shouldn't upset too many people... maybe the occasional Linux user will become depressed that he can only OC his GPU 10% while remaining stable? The rest of us can just OC with RivaTuner as usual.

I don't see a problem with the technology itself, although it would've been nice if nVidia had told people about it.
 

Kougar

Techgage Staff
Staff member
Well, here are the problems I see with it.

Users are going to overclock their GPUs the same way they always have... but since almost every program does NOT display the actual clocks, they will find that their cards go unstable at much lower overclocks than normal and have no idea why. Two or three sites already ran into this problem when reviewing 9600 GT cards.
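Here's a rough sketch of that failure mode on a board with the bus raised to 110 MHz. The 720 MHz stability limit is hypothetical, and the linear scaling follows techPowerUp!'s observation; none of these are measured numbers.

```python
# A sketch of why overclocks seem to fail early: the tool shows one clock,
# the silicon runs another. The 720 MHz stability limit is hypothetical.

MAX_STABLE_CORE_MHZ = 720.0      # hypothetical limit of this particular chip
PCIE_MHZ = 110.0                 # bus auto-raised by the motherboard

def actual_core(reported_mhz: float, pcie_mhz: float) -> float:
    """Real core clock when the reported clock rides the PCI-E bus."""
    return reported_mhz * (pcie_mhz / 100.0)

for reported in (650, 660, 670, 690):
    real = actual_core(reported, PCIE_MHZ)
    status = "stable" if real <= MAX_STABLE_CORE_MHZ else "CRASH"
    print(f"tool shows {reported} MHz -> card runs {real:.0f} MHz ({status})")

# The "overclock" dies barely past stock, and nothing on screen explains why.
```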

Then there is the potential that NVIDIA was trying to cheat the system. How is a user to know when the PCIe bus is being dynamically overclocked and set back to stock, unless they are actively monitoring it? Most tools do not report the PCIe bus frequency, since until now it had no purpose.

There really is no reason to have done this, except possibly to cheat the system. I have yet to hear a single valid reason for this "feature". Since NVIDIA chipsets must be used for SLI benchmarking, this means they could give a nice boost to SLI scores if they wanted. Do keep in mind that most sites benchmark with graphics cards set to "stock reference" clockspeeds, not just the factory overclocks... this would also let NVIDIA secretly up-clock those stock reference-clocked cards.

There is no reason to mess with the PCIe bus, and even settings of 110-115 MHz are the upper limit for most motherboards. Overclocking the PCIe bus will also negatively impact any CPU overclock, or potentially make stable CPU overclocks go unstable for those OCers who go beyond a moderate overclock and have pushed the MCH to its limits. The fact that this is all going on behind the curtain just makes it even worse.
 

NicePants42

Partition Master
Valid points.

But the problems the user experiences can be handled easily enough just by knowing what's going on.

That leaves the 'cheating the system' issue. Obviously nVidia was trying to get ahead performance-wise. If you want to call it 'cheating the system', feel free, but this isn't just some driver optimization; it's an actual increased clockspeed, which means real, tangible benefits. While I think nVidia should've disclosed what they'd done, I'd sooner encourage ideas like this than condemn them.
 

Kougar

Techgage Staff
Staff member
That is just it. They could simply raise the clockspeed. Why tie the clockspeed to the PCIe bus and lie about the crystal frequency, when they could simply set a higher clockspeed to begin with and be done with it? Not to mention they could then advertise the higher clockspeeds... many people buy their gaming GPUs based on factory overclocks, or at least take them into consideration when choosing between two similar brands.
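For the curious, here's a minimal sketch of how that "lie" plays out: monitoring tools compute the core clock from PLL divider registers using a fixed assumed reference, so their readout never moves when the bus does. The PCI-E/4 reference is what techPowerUp! reported for this card; the divider values (N=104, M=4) and the simple ref × N / M model are illustrative assumptions, not real hardware reads.

```python
# Sketch: tool readout vs. real clock on the 9600 GT. Divider values and
# the PLL model are hypothetical; the PCI-E/4 reference is per techPowerUp!.

N, M = 104, 4                  # hypothetical PLL dividers

def pll_clock(ref_mhz: float) -> float:
    return ref_mhz * N / M     # simple ref * N / M PLL model

ASSUMED_REF_MHZ = 25.0         # fixed reference the tool assumes

for pcie in (100, 110, 125):
    real_ref = pcie / 4.0      # card's actual reference: PCI-E clock / 4
    print(f"PCI-E {pcie:>3} MHz: tool shows {pll_clock(ASSUMED_REF_MHZ):.0f} MHz,"
          f" card runs {pll_clock(real_ref):.0f} MHz")
```

At stock the two numbers agree, which is exactly why nobody noticed; raise the bus and the readout stays frozen while the silicon speeds up.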

The crux of my point is: why involve the PCIe bus at all? That's like driving your car around the block and through the alley to bring in the trash cans right behind your house. Why not just walk out the back door? :p
 