Unregistered said:
Note: Might want to come up with a better name than GPGPU; it's a bit of a misnomer now. Maybe General Purpose Media Processing Unit (GPMPU)?
I'm not sure that's entirely appropriate, because there have been numerous non-media uses for GPGPU as well, such as heavy scientific number-crunching and password cracking, and just earlier this week we learned that Kaspersky is using GPGPU for faster virus detection and heuristics. Not all of these sound that exciting, but there's a reason some companies have begun building "supercomputers" with GPUs, and they're not doing it for media purposes.
Unregistered said:
I don't think I was totally off base, but Intel may want to create an entirely separate processing unit, like the FPU from the days of the 386. It would be composed primarily of multiple SIMD units.
That's one option, but that wasn't exactly the smartest design, either. I was once talking to an Intel engineer about that exact design, and he was embarrassed to even talk about it (I'm not sure whether he was one of the engineers on that project). Things could be different today, though.
Unregistered said:
It would be the first stage towards eventually moving SIMD functions off onto a dedicated chip for media processing, with the CPU issuing VLIW instructions to the GPMPU to maximize performance. I could maybe come up with more reasons for doing it this way, but I would end up rambling on and on and on...
I admit I'm intrigued by the idea, but I'd be hard-pressed to believe Intel hasn't taken a look at such options. I'm no engineer, and you seem to be better versed in the subject than I am, but it seems to me that Intel was doomed from the start with an x86 direction. I can understand why it took that route, but I shudder to think of all the R&D dollars that were "wasted" during the entire Larrabee project, and what do we have for it now? Not much. Intel will supposedly release a software counterpart, but I don't see that having much use, either. What use could it have? What game developers are going to want to use some library built around a non-existent GPU architecture, from a company that's never been known to produce quality GPUs?
Unregistered said:
Nvidia does offer GPGPU programmability using C (or C++) for its products through its proprietary CUDA language, and will probably support more open GPGPU languages like OpenCL and Microsoft's DirectCompute.
The problem is that NVIDIA's GPUs aren't native x86, while Larrabee would be. With a native x86 card, developers would be able to code in the manner they're used to while getting the best performance possible. NVIDIA offers support for C/C++ and others, but as far as I'm aware, not all of them deliver the same level of performance in the end. If you take a look at the latest version of SANDRA (2010) and run its GPGPU benchmarks, you can test using CUDA, OpenCL, Stream, DirectCompute, et cetera, and you'll see that all of these perform quite differently. For AMD and NVIDIA, their respective Stream and CUDA perform the best.
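For anyone curious about what that C-style GPGPU programming actually looks like, here's a minimal CUDA sketch that adds two arrays on the GPU. It's only an illustration of the programming model, not production code; the array size, names and launch configuration are my own choices.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;              // one million elements
    size_t bytes = n * sizeof(float);

    // Host (CPU) buffers
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device (GPU) buffers
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back and spot-check it
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);      // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

Each GPU thread handles a single element, which is exactly the kind of data-parallel pattern these GPGPU languages are built around.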
Intel doesn't have a "Stream" or a "CUDA", but it does have IA, which developers are already familiar with, and as such, Larrabee should be able to offer great performance for OpenCL and perhaps others, along with proper C/C++ support, because it's not a different architecture. I'm also not too sure that Intel would build Larrabee with a lot of what makes an x86 chip a desktop chip... I'd expect a lot to be tweaked and altered in order to fit as many of these cores as possible into a single chip for graphics use.
Unregistered said:
The possibility I see is Intel trimming the CPU down to a minimum of two cores with hyper-threading or ultra-threading (>2 threads per core), a memory and I/O hub, and huge amounts of cache (trace, L1, L2 and maybe even more), while the "physics engine" might comprise various concepts, designs and technologies used in SIMD and Merced.
Are you saying that ultra-threading would be ideal for game use? I admit it's not something I had thought about before, and I'm not quite sold that it would improve gaming. I still believe we need a lot of fast cores, not just a handful with HT/UT. The sad thing is that because Intel's x86 architecture isn't designed for gaming, the company hit a serious roadblock; the inefficiency is just insane, and I believe that's what killed the project for now. Seeing the Larrabee demo at this past IDF was depressing... we all hoped to see something a lot cooler by that point in time.
Unregistered said:
One of the problems in designing systems is that when some key technologies reach a certain level of maturity, the design of the whole system needs to evolve to move forward. A paradigm shift is required.
One side of my brain tells me that Intel should have started completely fresh from the beginning, forgoing an x86 design, but the other side says that what it was doing was cool, because it had compatibility in mind... a GPU that could offer excellent gaming performance and exceptional computational performance.
Unregistered said:
WHAT IF... Nvidia starts selling a complete solution... replacing the Intel processor with their own processor, the Ion. First the supercomputing world then it would start to trickle into the home... The CPU doesn't need to be super fast, just very good at managing/scheduling I/O...
If this happens, it's going to be extremely interesting, because it will be a major shift from what we're used to. I can't see it happening for a while, though, because GPGPU hasn't evolved to that point yet. Plus, just as Intel has little experience with GPUs (compared to the others), it's unlikely that NVIDIA could produce a CPU competent for even modest use anytime soon. I could be wrong, and if I am, it's probably because NVIDIA has been working on something for a while (and few outside of NVIDIA would be aware of it).
Unregistered said:
What would you end up with... how about a world-class supercomputer for the home at well under $3,000? Well, technically... today's personal computers are probably faster than most of the supercomputers built less than a decade ago. Today's PC technology is outpacing today's supercomputers, and tomorrow's supercomputers will be personal home computers. It's been done... 'nuff said
I'd love to see this happen, but if it does, I think it's going to take a while. I have a feeling that the next couple of years are going to be extremely interesting where processors in general are concerned.