Intel CPU Architecture 'Could Scale to 1,000 Cores'

Rob Williams

Staff member
Ten years ago, the thought of a dual-core CPU didn't quite compute (pun intended) in our minds, because such products simply didn't exist. But today, multi-core processors are common in even the simplest of mobile devices, and on our desktops, some are lucky enough to have six cores at their disposal. For the most part, today's CPUs offer a great amount of power for most people, but what about the future?

Read the rest of our post and discuss it here!


Senior Editor
Picometers are smaller than atoms (a helium atom's radius is only about 31 pm), so I doubt it, lol. They'd probably strip out certain redundant core logic and maybe stack the cores across multiple layers; since each core would consume very little power, layering becomes feasible thanks to the reduced heat... unless they go the IBM route and integrate hollow tubes between layers as water channels. Anyway, these would be enterprise-level parts, coming out of Intel's terascale project and the Knights series of add-on cards. Plus, they said it could theoretically scale to 1,000 cores, not that it will, lol.

What I found interesting is the recommended removal of cache coherency at higher core counts. In effect, each core (or group of cores) gets its own view of system memory, or at the terascale, distributed shared memory. This is done so that cores don't start treading on each other's toes, which prevents redundant processing and memory corruption. When you look at scalable programming languages designed for multi-core and distributed environments (Erlang, for example), they actually forbid shared memory and mutable variables, so it becomes impossible for these coherency issues to occur. The problem is that it requires a whole new level of thinking and designing.
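To make that last point concrete, here's a rough sketch of the message-passing style in plain Python (just an analogy: Erlang enforces this at the language level, while here we only imitate it with threads and queues; the `worker`/`run` names are made up for illustration). The key idea is that no variable is ever shared between workers, so there is nothing for a coherency protocol to keep in sync:

```python
# Illustrative only -- imitating Erlang-style "no shared memory" with
# Python threads and queues. Each worker owns its state privately and
# communicates purely by messages, so coherency conflicts can't arise.
import threading
import queue

def worker(core_id, inbox, results):
    """Each 'core' keeps a private total; nothing is mutated in shared memory."""
    total = 0                    # private to this worker, never shared
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: report result and shut down
            results.put((core_id, total))
            return
        total += msg             # local update only -- no coherency traffic

def run(values, n_workers=4):
    inboxes = [queue.Queue() for _ in range(n_workers)]
    results = queue.Queue()
    threads = [
        threading.Thread(target=worker, args=(i, inboxes[i], results))
        for i in range(n_workers)
    ]
    for t in threads:
        t.start()
    # Distribute work by sending messages, round-robin,
    # instead of handing every thread the same shared array.
    for i, v in enumerate(values):
        inboxes[i % n_workers].put(v)
    for box in inboxes:
        box.put(None)
    for t in threads:
        t.join()
    partials = [results.get() for _ in range(n_workers)]
    return sum(total for _, total in partials)

print(run(range(10)))  # 45
```

Nothing magic, but it shows why the hardware can drop coherency: if software never writes to a location another core reads, the caches have nothing to argue about.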