Integrated Graphics a Thing of the... Future..?

Jakal

Tech Monkey
As someone still using a Core 2 processor and ready to upgrade, I wanted to learn as much as I could about the upcoming Haswell chips. My search began with expected release dates and moved on to the chips' updated architecture and improvements. I stopped to post this once I learned about the up-and-coming chipset, Z87.

Ivy Bridge's home is the X79 chipset, and for good reason: quad-channel memory support for up to 128GB of DDR3 RAM, native PCIe 3.0 with two x16 slots, and a plethora of data pipelines to go with it. Along with the IB architecture comes better integrated graphics. There's even a video showing off the HD 4000 with an i7-3770, maintaining 20+ fps at High settings in BF3. No, these numbers aren't great, but imagine if you double or triple the on-die graphic abilities. That's what Haswell and Z87 bring to the table: integrated graphics that may not be great, but at least sufficient for most games on the market today.

The problem with this 'table' is PCIe support drops from 2 x16 slots for SLi/CF to x16 and x8. The current X79 chipset gives enthusiasts better performance when running two cards. Before there's any flaming, I only spoke of two PCIe slots because that's the most common SLi/CF configuration. This raises a serious question in my mind. Are we consumers missing out because Intel is pushing its integrated graphics solution? Why wouldn't a newer architecture support current high-end graphics options? Will consumers opt for an add-on card?

It's very possible, even likely, that things will change between now and June '13, but a lateral graphics move seems like an odd thing to do. It also raises the question of how this will affect graphics card prices, especially if an integrated GPU can keep up with current-gen options.

Thoughts?
 

Optix

Basket Chassis
Staff member
To be honest, I haven't heard of the Z78 chipset so I'll need to do some reading, but think of it this way.

Entry level (H-series boards) is usually a single slot at x16 and maybe a second running at x4 from the chipset, not the CPU.

Mainstream (Z-series boards) usually offer a single slot at x16 or knock it down to x8x8 if two cards are being used. Again, there could be another slot running at x4 or even x1 from the chipset.

Enthusiast (X-series boards) are the creme de la creme and offer x16x16 (and sometimes another x16 with a bridge chip).

If the Z-series boards offered everything that the X-series boards did, there'd be no reason to even put out the X-series. Either that or the Z-series boards would take a sharp jump in price because they would now be the enthusiast offering.

Integrated graphics will never be able to touch high-resolution gaming with a standalone GPU. It just can't happen. With that said, IGPs have made HUGE gains in terms of performance. Now that they're capable of light gaming at 1080p, there's just no reason to pick up a discrete GPU for a home theater PC or if you want a daily driver to check your email with an occasional WoW session thrown in.

You need to be really careful when interpreting PCIe specs because some manufacturers twist things in such a way that you think you're getting x16x16 when you're really only getting x16x8.

***EDIT: After several searches I can't find squat on the Z78 chipset. Hey Rob, help a brotha out!
 

madmat

Soup Nazi
Optix said:
***EDIT: After several searches I can't find squat on the Z78 chipset. Hey Rob, help a brotha out!

That's because it's Z87 not Z78.

http://lensfire.blogspot.com/2012/07/22nm-intel-haswell-processor-release.html
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Just to be clear, X79 is for Sandy Bridge-E, not Ivy Bridge. IB is architecturally superior to SB-E, but it lacks the quad-channel memory controller and six-core options. At the moment, I am not even sure when the next six-cores are speculated for release... all Haswell talk specifically tackles quad-cores and under.

Jakal said:
No, these numbers aren't great, but imagine if you double or triple the on-die graphic abilities.

We've been on a path for a while where IGPs might negate the need for sub-$100 graphics cards, and Haswell might seal the deal. But for me, support also matters. I haven't tested an Intel IGP for a while, but the last time I did (with Clarkdale, I believe), I had issues with games crashing on the IGP that didn't occur with even the smallest discrete GPU I could find. Even now, IGPs are suitable for the masses, but your gaming requirements still have to be minimal.

IGPs are never going to be better than a dedicated card, if that's what you're asking. Even with smaller die-shrinks, there's just no room to fit in all of what a dedicated GPU can with its large PCB and allocated die-area. I'm sure these CPU sockets have some power-limit as well, ruling out the possibility of having a truly competitive GPU be placed there.

Jakal said:
The problem with this 'table' is PCIe support drops from 2 x16 slots for SLi/CF to x16 and x8.

I've never seen evidence of this mattering, to be honest, and the only time I'd ever be concerned would be if you were to go the quad-GPU (on two cards) route, and maybe not even then. It'd be a different story if we were still restricted to PCIe 1.0, but current motherboards sport 3.0, so we have more than enough bandwidth for even the beefiest of cards.

In the overall scheme of things, what matters a lot more is CPU performance.
 

Tharic-Nar

Senior Editor
Staff member
Moderator
I'd have to emphasise Rob's last point there. x16x8 really doesn't matter when you are dealing with PCIe 3.0 as far as GPUs go. PCIe 2.0 x8 has more than enough bandwidth for a top-end dual-GPU card, and PCIe 3.0 doubles that bandwidth again. Technically, you could run a top-end dual GPU in an x4 slot with PCIe 3.0 - the reason you can't is power. So no, enthusiasts are not being given the shaft as far as limited PCIe lanes go; there's plenty of bandwidth to go around.
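To put rough numbers on that, here's a quick back-of-the-envelope sketch (assuming the commonly quoted theoretical per-lane figures of roughly 500 MB/s for PCIe 2.0 and 985 MB/s for PCIe 3.0; real-world throughput runs a little lower):

Code:
# Theoretical one-direction PCIe bandwidth per slot, ignoring everything but line encoding.
# Per lane: PCIe 2.0 is 5 GT/s with 8b/10b (~500 MB/s), PCIe 3.0 is 8 GT/s with 128b/130b (~985 MB/s).
PER_LANE_MBS = {"2.0": 500, "3.0": 985}

def slot_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Rough per-slot bandwidth in GB/s for a given PCIe generation and lane count."""
    return PER_LANE_MBS[gen] * lanes / 1000

for gen, lanes in [("2.0", 8), ("2.0", 16), ("3.0", 4), ("3.0", 8), ("3.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{slot_bandwidth_gbs(gen, lanes):.1f} GB/s")

A 3.0 x4 link lands right around a 2.0 x8 link (~4 GB/s), and a 3.0 x8 matches the old 2.0 x16, which is why the x16/x8 split stops mattering for GPUs.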

PCIe SSDs, on the other hand... while they won't be able to max out that bandwidth at the moment, give them a couple of years and I'm sure they could - not that a typical consumer, or indeed enthusiast, would be able to take advantage of that kind of storage bandwidth (you're talking queue depths of 16+ to actually start making real use of it, and a typical, highly stressed home PC barely scrapes a queue depth of 2).
 

Jakal

Tech Monkey
Z77 vs X79

The Z77 supports 3rd Gen Intel Core processors, native USB 3.0, and better storage support, while X79 supports the 'Enthusiast' LGA2011 CPUs, a quad-channel DDR3 memory controller, and native dual x16 PCIe bandwidth.

Let's be honest here. We're at a point in CPU progression where overclocking is basically unnecessary and the graphics option is the real bottleneck. That's not to say scooping up a cheap entry-level processor and putting the fire to it doesn't save you a bunch of money.

Just thought it was an interesting comparison to what the Z87 offers. I know we'll probably see an X87 release, but for now this is all we have to go on.

Good stuff guys.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
If there existed a Z77 six-core CPU, X79 and SB-E would have no reason to exist - outside of the extreme example where someone needs the massive memory bandwidth a quad-channel controller can offer (not a regular enthusiast).

I'm still at a loss as to when the next -new- six-core is due. It can't possibly be with the architecture after Haswell, because that'd be way, way too long of a wait. There's another six-core due over the next month or so, but it's still X79.
 

RainMotorsports

Partition Master
Jakal said:
Let's be honest here. We're at a point in CPU progression where overclocking is basically unnecessary and the graphics option is the real bottleneck. That's not to say scooping up a cheap entry-level processor and putting the fire to it doesn't save you a bunch of money.

Overclocking unnecessary? Man, I'd have to kill myself over the time I would lose rendering models and encoding video by dropping my 1.2 GHz overclock! Look, only a portion of these chips go into gaming machines! GPU acceleration is the way to go, of course, but it's not always available or even reliable (see Vegas Pro 11... lol).

There is not an integrated solution even in the pipeline from either company that will actually do something for a serious gamer. Source engine games and the Call of Duty series, sure, and that's actually a great thing that has worked its way down. I mean, I recently tested that 6670 with BF3, which clobbers the i5-2500K's graphics solution (which blows even when overclocked), and that is a better card than most integrated solutions offer up to this very point in time. Ask AMD if they see a mid-range next-gen chip making it into CPUs next year; the answer is no. I wouldn't bother asking Intel for the truth about anything; we still need to give them crap about that F1 demo.

The future it may be, but even in the near future you won't see any serious gamer playing on one, given that the budget for what they want is available. And when you do, the trend that has always existed for graphics performance versus game releases suggests you'd end up replacing the CPU more often than ever before.

Intel purposely limits the available PCIe lanes. They were forced by legal action to keep PCIe in their product line. Current-generation GPUs will find themselves plenty happy on x8 3.0. Intel is afraid of GPU compute power. If it weren't valuable for everything from servers to supercomputing, you would actually have a shot at getting 32 lanes from a low-end consumer chip and be entirely unworried about PCIe 3.0 bandwidth for quite some time. But that won't happen; it's profitable in many arenas, and they will take your cash as well.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
For a regular consumer, overclocking might not be that necessary. I don't even personally overclock my own rig, but that's mainly because I have six cores, which naturally boosts encode speeds already. I'd maybe OC if I had a beefier cooler, but it's not a major concern.

And yes, Intel does limit the PCIe lanes, but I think it's mostly done to fool those who don't know better into buying the biggest and best chipset out there.
 

Psi*

Tech Monkey
Rain & I are mostly in the same camp.

I use GPGPUs extensively, but even then the host CPU has to be top-notch. Not all time-consuming processes are worth pushing through a massive amount of processors, even with Nvidia giving the CUDA library away for free. Another way of saying this is that not everything can be parallelized, so the fastest CPUs (cores) for all of those single-threaded processes will always be important.

For those processes that do benefit from massively parallel processors, there just are not enough PCIe lanes to maintain communication with the host. (The amount is a wild card dependent on the algorithms and the problem.)

What is it that Nvidia is promising? Kepler is triple Fermi, and next is Maxwell, promised to be not quite triple Kepler (2013). (I am getting excited!!)

In other words, I really like overclocking and many PCIe lanes.

For the average consumer, a category I also fall into, I see an incredible capability for 3D interactive... experience. Real-world simulation is coming soon.

Last, I am not sure that current systems are intentionally dumbed down PCIe-wise. Current computer motherboards are inexpensive - I am thinking of the HP and Dell workstations. Expensive are the server and router systems that Brocade and Cisco crank out... try 20+ layers. For the current Nvidia Fermi GPUs, one CPU socket per GPU is needed to keep the necessary PCIe lanes. Additionally, the host system requires 4x the RAM of the GPGPU card(s). At least for my software.
 

Jakal

Tech Monkey
RainMotorsports said:
Man, I'd have to kill myself over the time I would lose rendering models and encoding video by dropping my 1.2 GHz overclock!

I was speaking more from an overall perspective with current-gen options. You are right on the money when it comes to encoding and rendering. The higher the better.

It's one reason I said:
Jakal said:
That's not to say scooping up a cheap entry-level processor and putting the fire to it doesn't save you a bunch of money.

I have as much fun as the next guy pushing my components to reach a good overclock. It's like buying a car and seeing how fast it'll go. The thing about it is that most stock options perform well enough that pushing the limits, and possibly causing damage, is unnecessary. It's still fun.
 

Kougar

Techgage Staff
Staff member
Rob... IB-E is due next year. I don't have an exact date though... last roadmap I saw says Haswell in Q2 and IB-E in Q3. All I know is that Intel will not want to launch IB-E alongside Haswell, so those launches will be spaced apart.

The problem with IB-E is that it's only a tiny performance boost over SB-E. The successor, Haswell-E (or whatever they plan to call it), will most likely not be compatible with X79, just as Haswell will require a new mainstream socket and chipset in place of Z77.

The issue with integrated graphics... you can double the amount of IGP hardware three, four, or even five times and you will still not come close to the performance of flagship GPUs. The discrepancy in hardware available is just too huge. Even if you look at AMD's Fusion design, they used a very trimmed down GPU core design for molding into their APUs. IGPs will certainly begin eating into the low-end GPU market but it will be quite some time before they reach the upper-midrange class of performance.
 

Tharic-Nar

Senior Editor
Staff member
Moderator
I wouldn't be so hasty in dismissing IGPs like we used to. Remember, they were on the north bridge for quite some time, and a lot of their reputation was built on that very inefficient foundation. Now that the GPU is on-die - not on a shared die, but actually part of the CPU - a lot of the bandwidth bottlenecks are removed. The final obstacle is memory bandwidth, which is where things get interesting. Since the GPU is sharing space with the CPU, it has access to its memory registers and prefetch technology. RAM quantities are through the roof, with 12GB and 16GB becoming more common, even in laptops. DDR4 is coming out soon with a minimum frequency of 2133 MHz - and RAM frequency has been shown to have a major impact on on-die GPUs.

Now, don't get me wrong, these IGPs will not compete with the monolithic beasts we have as discrete cards in the top end, but I do see them easily absorbing the low end and even the mid-range in the medium term. Intel specifically has another major trick up its sleeve too, something that no other company can compete with: manufacturing process. Intel can make its chips much more powerful within the same footprint as other chip designs. AMD still has an advantage when it comes to drivers, architecture and experience, an advantage that is patent-locked. NVIDIA is finding its niche with mobility and scientific computing, so I'm not too worried about it lacking a CPU integration profile since it's buddying up with ARM-based CPUs.

Look at it this way... the IGPs we have now are much more powerful than the discrete solutions in consoles, are significantly more energy efficient, and take up a much smaller footprint. That may not be saying much considering the 5-7 year gap, but from a consumer perspective, IGPs are better than a console, with the added benefit of a full computer on top; they just need a glossy and comfortable wrapper to make the system feel like a console. Enter Valve's hardware project and Linux support.

Start pulling the threads together and you'll begin to see where it's heading.
 

Kougar

Techgage Staff
Staff member
I'm not dismissing them outright. I merely don't see IGPs taking over the gaming market any time within the next few years, to put it mildly.

The point about AMD's APUs showing performance increases with faster RAM also goes to my point. Changing from DDR3-1333 to DDR3-1866 RAM gave Llano a 20% boost in GPU framerates. This underscores that the IGP is still limited by its footprint and by having to wait on the CPU's memory controllers, the added latency involved, and any queuing when the CPU is using them.

As fast as the CPU controllers and RAM will be, they will never make up for having GDDR5 RAM directly next to the core with dedicated memory access. The latest quad-channel Core i7 3960X processor gets about 40GB/s of bandwidth (20GB/s for the 8-core FX-8150, or 25GB/s for the Core i7 2600K). Compare this to the 192GB/s of the GTX 680 or 264GB/s for the AMD 7970. It will take more than DDR4 and quad-channel memory to even begin to make up for the discrepancy... And Haswell / Trinity are operating with just dual-channel memory controllers, not triple or quad. Four CPU cores (especially with HT) can easily keep just two memory controllers busy.
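Those figures fall straight out of the usual peak-bandwidth formula (bus width in bits x effective transfer rate / 8); here's a quick sketch, assuming standard stock memory speeds for each part:

Code:
# Peak theoretical memory bandwidth: bus width (bits) * effective rate (MT/s) / 8 bits per byte.
def peak_bandwidth_gbs(bus_bits: int, mts: int) -> float:
    return bus_bits * mts / 8 / 1000

print(peak_bandwidth_gbs(128, 1600))   # dual-channel DDR3-1600 (Haswell/Trinity class): ~25.6 GB/s
print(peak_bandwidth_gbs(256, 1333))   # quad-channel DDR3-1333 (i7-3960X): ~42.7 GB/s
print(peak_bandwidth_gbs(256, 6008))   # GTX 680, 256-bit GDDR5: ~192 GB/s
print(peak_bandwidth_gbs(384, 5500))   # HD 7970, 384-bit GDDR5: ~264 GB/s

Even the quad-channel desktop controller sits at roughly a fifth of what a flagship GPU pulls from GDDR5 on a wide bus.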

Certainly midrange parts don't need nearly as much bandwidth, but the bandwidth requirements of games scale nearly exponentially as the resolution and detail settings are raised. APUs will not be able to remove this memory bottleneck any time in the near future... because the more powerful they get, the higher the detail settings that will be thrown at them, and the more bandwidth they will need to sustain it. That's why modern GPUs are still so freaking big.

The second problem is more basic. The silicon real-estate required to make the IGP in an APU as powerful as a mid-range GPU would be significant. The only reason APUs have managed to shrink what they have is partly how stripped down they are, along with the move away from bulk fab production and the logic circuit design tricks CPUs employ. But none of that will magically shrink a midrange GPU core down to the same size as a quad-core CPU. They already removed compute hardware, threw out most of the shaders, and who knows what else. The real-estate required is simply not there to put more than half a midrange GPU underneath the CPU heatsink.

The AMD 7770 is very midrange. It costs a mere $100 with a rebate today. It has 72GB/s of memory bandwidth thanks to GDDR5, and a die area of 123mm^2. By comparison, a quad-core Ivy Bridge using a GT1 GPU core measures 132.8mm^2. Make it a GT2 core and it jumps to 159.8mm^2. And that's with Intel's fabrication advantage already in play. There is no way even Intel can shrink things down far enough to squeeze in enough horsepower to challenge the midrange graphics market anytime soon.

So for both of those reasons I don't think midrange GPUs have much to fear for the next few years, if not the next five. The best possible case for APUs is when they become a true CPU/GPU hybrid. Duplicate sections of the CPU are removed (floating-point engines, etc.) and vice versa with the GPU. Enough silicon would hopefully be freed up to fit enough integer cores on the CPU side and shaders on the GPU side to allow a sufficiently "beefy" GPU plus quad-core CPU to be built into a reasonable footprint. Even then, I think it will take 1-2 more years before they iron out the hybrid design and get it performing to its potential... current roadmaps suggest it will be 2015 or later before we see the first genuine hybrids.

The point about modern IGPs trouncing consoles is fine, but six years ago chips were designed and built for the 90nm fab process. Take a 90nm Pentium 4 and build it on the 22nm process and it would be something like 9% of the size, assuming the transistors scaled equally. Prescott was 122mm^2 in area with 125 million transistors. A 160mm^2 Ivy Bridge chip is 1.4 billion transistors. So yeah, a modern IGP would trounce something from six years ago with that kind of scaling involved. At the end of the day, any modern APU is still not useful for more than gaming at lower resolutions or lower detail settings.
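As a rough cross-check on that scaling, using the die sizes and transistor counts quoted above (the exact figure depends on whether you scale by node geometry or by the density the chips actually achieved):

Code:
# Two crude ways to estimate how much a 90nm-era design shrinks at 22nm.
prescott_density = 125e6 / 122      # Prescott: ~1.0M transistors per mm^2
ivy_density      = 1.4e9 / 160      # Ivy Bridge: ~8.8M transistors per mm^2

by_density = prescott_density / ivy_density   # ~12% of the original area
by_node    = (22 / 90) ** 2                   # ~6% if geometry scaled perfectly

print(f"by realized density: {by_density:.0%}, by node geometry: {by_node:.0%}")

Either way you land in the same rough ballpark, and that freed-up silicon budget is exactly why a modern IGP embarrasses six-year-old discrete parts.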
 

Tharic-Nar

Senior Editor
Staff member
Moderator
The other thing that needs to be considered here is that people have been happy with console games. Tablets and phones are now catching up to consoles too. Yes, the resolutions are lower, but this is the critical point: it's good enough. With the gaming industry becoming platform-agnostic, the common denominator is console-grade graphics. It's slightly lower with tablets and phones, and the sky's the limit with PC, but more often than not, PC releases are more of a sympathy production compared to the real money makers. Why should developers spend so many resources on a PC release with all the bells and whistles that can't be rendered on the vast majority of systems? They get brownie points, sure, but that doesn't really make a living. I'm not saying all games are like this - just look at MMOs - but there are not many exclusive titles these days.

With this console-centric nature accounted for, IGPs will come into their own, simply because they are good enough to run games at 720p on an HD TV - with a few extra effects thrown in, like AA. Why did I say 720p? Because that is the resolution 95%+ of console games run at - 1080p on a console is limited to 2D and movies.

Next gen consoles are still at least a year or two off (not counting the Wii U). Even when they are released, they will not be staggeringly better than what's available now, since it'll still take time for the devs to get used to a new way of working. Sure, the new Unreal engine demo looked impressive, but how long did it take to create for something so short, and does it work on the new consoles?

What we are stuck in is not really a hardware problem - we are stuck in a software loop. IGPs are great for now, even for the next couple of years. The only strain a PC gamer can put on the hardware would come from multi-monitor displays, stereoscopic 3D, and HD texture packs. That's basically all we've got going for us in the high-end discrete GPU market.
 

Jakal

Tech Monkey
Tharic-Nar said:
The only strain a PC gamer can put on the hardware would come from multi-monitor displays, stereoscopic 3D, and HD texture packs. That's basically all we've got going for us in the high-end discrete GPU market.

Great post Thar, and good point here specifically.

Consoles are offering even more connectivity options and simplifying the gaming experience. I think this positively affects the laptop market. If all you need a computer for is documents, pictures, and web browsing, and that computer is portable, why get a bulky desktop system?

Apple has one line that offers the option of an add-on graphics card, the Mac Pro. There aren't too many people coming off the street to drop that kind of money on a desktop computer; the cheaper solutions are sufficient. The iPhone and iPad have done nothing but help people decide to switch. Once Macs started using Intel processors, games followed suit in support. Mainstream Apple desktops have no add-on capability, so those users won't be going with any aftermarket card.

The enthusiast market will still thrive, but I think we'll be the ones who pay for IGP advancement. Video card manufacturers will produce fewer cards as integrated offerings become 'good enough', and we'll see higher prices because of it.
 