NVIDIA to Enable SLI on Select X58-based Motherboards

Rob Williams

Editor-in-Chief
Staff member
Moderator
One of the biggest drawbacks of Intel's upcoming X58 platform has been the lack of support for NVIDIA's SLI. Well, that's about to change, and the way it's going to be handled might just surprise you.

NVIDIA's SLI won't be available on X58? Think again! The company has just announced 'native' support on select X58-based motherboards, all without the use of their own chipset. Dual-GPU configurations will be enabled without a bridge chip, while the higher-end offerings, including a potential Quad-GPU setup, will require one.


Read through the full briefing here and discuss it here if you have comments to make.
 

Kougar

Techgage Staff
Staff member
It's good to see NVIDIA isn't so completely self-centered that they can't acknowledge and actually listen to common sense. If they hadn't done this, then ATI would have been guaranteed a majority share of the most lucrative multi-GPU markets, and a larger slice of NVIDIA's shrinking pie.

To borrow a few interesting quotes:

Tech Report said:
Intel is even welcome to submit its own X58 motherboards for SLI certification, Nvidia spokesman Tom Petersen said, although not all board makers will be offered the same set of licensing options at the same price.

Anandtech said:
We were also told that while Intel’s own X58 motherboard isn’t currently on the certified list, Intel is more than welcome to submit it for certification.

NVIDIA just does not like Intel I guess... :rolleyes:
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Something tells me that Intel wouldn't jump on the bandwagon to add SLI to their own X58 boards anyway. I could be wrong, but I just can't see them giving in and admitting defeat. They did it with Skulltrail, but if they do it with the upcoming boards, it's likely to add $100 to the price, whereas it might be $50 or less on a board from ASUS or someone else.

I'm just glad there will be SLI support on X58... now we can move on to something else to whine about.
 

Kougar

Techgage Staff
Staff member
Well, isn't Intel the one not giving NVIDIA a license to build QPI logic for Intel processors?

It is interesting, though, that NVIDIA will be able (and stated they intend) to build chipsets for Lynnfield / LGA1160 platforms late next year. With that much time to build a new, IMC-less chipset, they at least have the chance to come up with something surprisingly good.

Intel still offers Crossfire after all this time, despite directly keeping AMD afloat through ATI in the process. Bad blood aside, I would be surprised if at least Intel's flagship X58 board didn't offer SLI support.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Kougar said:
Well, isn't Intel the one not giving NVIDIA a license to build QPI logic for Intel processors?

It's just a childish game that never seems to end. If I can't have that, then you can't have this. Intel wants NVIDIA to open up SLI and make it available for all motherboards, and since they won't, Intel won't allow NVIDIA to use QPI in their own chipsets. That's the way I've heard it, anyway.

Kougar said:
Intel still offers Crossfire after all this time, despite directly keeping AMD afloat through ATI in the process. Bad blood aside, I would be surprised if at least Intel's flagship X58 board didn't offer SLI support.

That's hard to say, but it would be good... I sure wouldn't complain!
 

Kougar

Techgage Staff
Staff member
Yeah, I did give up keeping track of "who started it"; as far as I'm concerned, both parties deserve equal blame for perpetuating it!

A little off topic, but I really can't wait for an X58 board. P35 is great, and 16x+4x PCIe slots seem fine for dual-GPU use... but I learned the hard way that using both PCIe 16x slots forces the board to deactivate all of the PCIe 1x slots, which was a trifle annoying. Trying to run Folding@home on multiple GPUs meant I couldn't use my Xonar sound card... the irony is that if I had bought the PCI version, I wouldn't have had any problem at all. ;)

X58 seems to offer the largest selection of PCIe 2.0 slots yet, and finally with the bandwidth to back it all up. Gigabyte had their X58 Extreme on display at NVISION, if you believe TGDaily... it has six GPU-capable PCIe 2.0 slots!

X58 can do dual electrical 16x slots, four 8x slots, or, on Gigabyte's board, even 8x+8x+4x+4x+4x+4x. It's not a Gigabyte board unless it's excessive to the extreme. :p I'd love the ability to turn one of those into a 6 TFLOP Folding@home monster, though; that much GPGPU power on one board is just wrong. It would overload my UPS anyway, if not the wall outlet too...
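Just to sanity-check the lane math, here's a minimal Python sketch (assuming the 32 graphics lanes X58 is reported to provide; the slot splits are the ones mentioned above):

```python
# Lane-budget check: X58's IOH is reported to provide 32 PCIe 2.0
# lanes for graphics slots.
X58_GRAPHICS_LANES = 32

configs = {
    "dual x16":          [16, 16],
    "quad x8":           [8, 8, 8, 8],
    "Gigabyte six-slot": [8, 8, 4, 4, 4, 4],
}

for name, lanes in configs.items():
    total = sum(lanes)
    status = "fits" if total <= X58_GRAPHICS_LANES else "over budget"
    print(f"{name}: {' + '.join(map(str, lanes))} = {total} lanes ({status})")
```

All three splits land exactly on the 32-lane budget, which is why no splitter chips would be needed.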
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Both can be blamed, but I still wish NVIDIA would open up SLI for all motherboards. They've just proven it can be done... they simply choose not to. I understand it was a business decision, but I personally believe SLI would look far more attractive to many more people if they knew they didn't have to own an NVIDIA-based motherboard.

As for the Gigabyte board with six GPU-capable slots... I'm not sure where that was. The one I was shown has just four slots. It really does prove that NVIDIA has Quad-SLI in the works, though.

Kougar said:
X58 can do dual electrical 16x slots, four 8x slots

If two nForce 200 bridges are used, that can be expanded to deliver four 16x PCI-E slots.
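The arithmetic behind that, as a rough sketch (assuming the commonly cited nForce 200 layout, where each bridge takes a 16-lane upstream link and fans out 32 downstream lanes):

```python
# Rough nForce 200 math (assumed figures: each bridge multiplexes a
# x16 upstream link into 32 downstream lanes, i.e. two x16 slots).
IOH_LANES = 32          # X58's graphics lanes
NF200_UPSTREAM = 16     # lanes each bridge consumes
NF200_DOWNSTREAM = 32   # lanes each bridge presents

bridges = IOH_LANES // NF200_UPSTREAM    # 2 bridges
downstream = bridges * NF200_DOWNSTREAM  # 64 electrical lanes
print(f"{bridges} bridges -> {downstream // 16} x16 slots, "
      f"all sharing {IOH_LANES} lanes of upstream bandwidth")
```

Note the four slots would be electrically x16, but they still share the IOH's 32 upstream lanes, so aggregate bandwidth to the CPU doesn't actually double.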
 

Attachments

  • gigabyte_x58_extreme_082908.jpg (170.6 KB)

Kougar

Techgage Staff
Staff member
Yep, that is the board!

Now look closely at those two PCIe 4x slots... notice that Gigabyte chopped off the ends and left them open-ended? You can plug any card, PCIe 16x or whichever you please, into them. It doesn't matter, because those slots are only electrically x4 regardless...

32 electrical lanes to split among six PCIe 16x-capable slots, no nForce 200 chips or fancy splitter chips needed. :)

I believe it was TTR that focused heavily on the fact that NVIDIA stated they wouldn't care if users hacked the BIOS key and used it to enable SLI unofficially on non-certified boards. The way the NVIDIA rep was quoted, it sounded like he said directly that they would not care, and that their drivers would not check for anything beyond the actual BIOS key, not the board model or anything else.

I agree, it's still not wide-open SLI, but on the other hand I can see why they would wish to certify boards for SLI operation. If users are free to "hack" those BIOS keys, then it really wouldn't matter much, except for the usual rash of bad hacked-BIOS flashes and ruined unapproved boards that would follow...
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Kougar said:
Now look closely at those two PCIe 4x slots... notice that Gigabyte chopped off the ends and left them open-ended? You can plug any card, PCIe 16x or whichever you please, into them. It doesn't matter, because those slots are only electrically x4 regardless...

Essentially though, this is good for Folders only, correct? Wouldn't the cards in the 4x slots have their performance held back by that, or no? I know that if you use two GPUs in Crossfire at 8x, it does affect the performance, so I'm curious if similar differences would be seen in this particular case.

Kougar said:
I believe it was TTR that focused heavily on the fact that NVIDIA stated they wouldn't care if users hacked the BIOS key and used it to enable SLI unofficially on non-certified boards.

Yeah, NVIDIA didn't seem to care too much about that. I think it was Ryan Shrout who said, "You know this will be hacked," and they responded with something to the effect of, "Of course." They know it's going to be hacked/reverse-engineered, and they really didn't seem to care.

Past Nehalem, I could see these drivers becoming hackable for prior-gen boards, like P35, P45, X38 and X48. If the check is handled entirely in software within the driver rather than in hardware, all that would really need to be changed would be the board IDs. NVIDIA took the 'hacking' question in stride and really didn't seem to be that worried. It's almost like they want it to happen.

Kougar said:
except for the usual rash of bad hacked-BIOS flashes and ruined unapproved boards that would follow...

I really don't see flashed BIOSes being the route to take here. I think any hacks would be software-driven, because hacking a BIOS would be on a per-BIOS basis, and the normal user isn't going to want to go that route. The driver could potentially be reverse-engineered to forgo the checking of board IDs entirely. That's what I'd expect to see before anything else.
 

Kougar

Techgage Staff
Staff member
Rob Williams said:
Essentially though, this is good for Folders only, correct? Wouldn't the cards in the 4x slots have their performance held back by that, or no? I know that if you use two GPUs in Crossfire at 8x, it does affect the performance, so I'm curious if similar differences would be seen in this particular case.

That is the thousand-dollar question. One site reported yes (IIRC it was TweakTown), but other sites have disproven it. Generally, it is safe to say there is a great deal of confusion about this, partly because of PCIe 2.0 muddling the issue.

There are two full PCIe slots... I'm not sure if they can operate at 16x or only 8x. But because this is PCIe 2.0, an 8x slot shouldn't affect the performance in the slightest.

PCIe 1.1's 4x slot was a real bottleneck, and 8x should still be a bottleneck for some configurations... but PCIe 2.0 8x offers the same bandwidth as a PCIe 1.1 16x slot, so it should be fine.
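To put rough numbers on it, here's a minimal sketch of the bandwidth math (using the commonly quoted ~250 MB/s per lane per direction for PCIe 1.1 and ~500 MB/s for PCIe 2.0, after encoding overhead):

```python
# Approximate usable bandwidth per lane, per direction, after 8b/10b
# encoding (PCIe 1.1 runs at 2.5 GT/s, PCIe 2.0 at 5 GT/s).
MB_PER_LANE = {"PCIe 1.1": 250, "PCIe 2.0": 500}

def link_bandwidth(gen, lanes):
    """One-direction link bandwidth in MB/s."""
    return MB_PER_LANE[gen] * lanes

print(link_bandwidth("PCIe 1.1", 16))  # 4000 MB/s
print(link_bandwidth("PCIe 2.0", 8))   # 4000 MB/s -- same as 1.1 x16
print(link_bandwidth("PCIe 2.0", 4))   # 2000 MB/s -- same as 1.1 x8
```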

But to answer your point: yes, 4x slots certainly wouldn't be worth it for gaming, even if they equate to 1.1's 8x in bandwidth now. Anything else, from physics cards to GPGPU purposes such as folding, would be great in them, though. 4x shouldn't affect folding performance at all.

I guess we'd need to wait and see, because (going by my current board) I would also assume the PCIe lanes can be dynamically assigned... which means Quad SLI or Quad Xfire using four GPUs in four 8x PCIe slots should be possible, for those who wished to go that route.
 

Merlin

The Tech Wizard
I can see a mix-up about to happen: some boards with SLI enabled and some without, even though they may be labeled.

Merlin
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Kougar said:
PCIe 1.1's 4x slot was a real bottleneck, and 8x should still be a bottleneck for some configurations... but PCIe 2.0 8x offers the same bandwidth as a PCIe 1.1 16x slot, so it should be fine.

I'm still confused about how the card will work when half of its pins are not connected to a slot, though.
 

Kougar

Techgage Staff
Staff member
Ah, I assumed ya knew! :)

The biggest advantage of PCIe over AGP is that the PCIe standard is 100% interchangeable. I can run my ASUS Xonar PCIe 1x card in ANY PCI Express slot: 16x, 1x, or anything in between. If I plug it into a 16x slot, it will run at 1x, as designed.

Same goes vice versa: assuming the slot is open-ended or card length is a non-issue, you can plug a PCIe 16x GPU into a 4x or even a 1x slot, if you had a mind to. This is the first time I have seen open-ended 4x slots, though. So, for example, you can plug your PCIe 4x RAID card into a PCIe 8x slot, and it works exactly the same as it would in a 4x slot.

The spec is designed so that you can run 1x, 4x, 8x, or 16x cards in the slot. IIRC, the power and control pins are the ones before the notch, so they don't change; the rest are just data pins. I just know that the slots/cards are interchangeable and designed to be that way... as ya well know, the only issue is potential bottlenecks from running a card designed for a higher-bandwidth slot.
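Here's a tiny illustrative sketch of what link training effectively does with widths (a hypothetical helper, just to show the "settle on the widest width both ends support" behavior):

```python
# Hypothetical illustration: PCIe link training settles on the widest
# standard width both the card and the slot support, so a x16 card in
# an open-ended x4 slot simply trains down to x4.
SUPPORTED_WIDTHS = (1, 4, 8, 16)

def negotiated_width(card_width, slot_electrical_width):
    """Link width a card/slot pair would train down to."""
    limit = min(card_width, slot_electrical_width)
    return max(w for w in SUPPORTED_WIDTHS if w <= limit)

print(negotiated_width(16, 4))   # x16 GPU in an open-ended x4 slot -> 4
print(negotiated_width(1, 16))   # x1 Xonar in a x16 slot -> 1
```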
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Well, I understand that you can plug a 1x card into a 16x slot (I do that with my PCI-E Xonar also), but I didn't understand why half the pins would be left out in the open and the card would still operate at 100%. Even if you plug a 1x card into a 16x slot, all of the card's pins are taken care of; 60% of the pins won't be left exposed like they will be when a 16x card is plugged into a 4x slot, or whatever those are.

It just strikes me as odd that half the pins could be left exposed and the card would continue to operate just fine. At the very least, I'd have to assume that the performance would be really degraded.

This is something I'm going to have to test when I find the time...
 

Kougar

Techgage Staff
Staff member
Ohh, I now understand what you are saying. I was thinking the inverse, the pins in the slot itself. Oops.

The thing is, take a close look at those PCIe 16x slots that can only operate at 8x electrically, max. If you look really closely, you will notice half of the pins inside the slot are not even there; it's empty plastic. ;)

My EP35-DS4 motherboard is this way with the 2nd PCIe 16x slot, because it only has 4x lane capability.

Madshrimps got their mitts on one of these demo boards; they stated it was 2x x16, 2x x8, 2x x4 operation.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
I gotcha... so regardless of what card is put in there, it will just downgrade to the slot's operating speed? Does that logic work with all cards, or just GPUs? I'm assuming the 'important' pins are the first ones in line, then... interesting. I wasn't really aware of that... thanks for pointing it out.

Kougar said:
Madshrimps got their mitts on one of these demo boards; they stated it was 2x x16, 2x x8, 2x x4 operation.

Sounds good to me... even Crossfire with four GPUs in an 8x configuration would kick ass, you'd imagine. Nice that Gigabyte is thinking outside the box.
 

Kougar

Techgage Staff
Staff member
Yep, all the critical pins are before that first notch. It's part of the PCIe specification, so it should work with any card rated for PCIe!

I think Gigabyte simply ran out of things to do excessively. Four BIOSes, four Gigabit Ethernet ports, four sets of three power phases, four eSATA ports via included brackets... now they've added four PCIe 16x slots with two extra 4x slots. :D
 