Steve Scott is obviously correct. However, he comes at this from the programmer's side, while Intel's easy talk is that of a seller who's eager to demonstrate the benefits of a new technology it wants to sell like water in the desert.
Of course, the notion that such a specialized parallel architecture can be targeted by simple recompilation is just for stockholders to hear. Any software developer will know this is sales talk. In fact, we have been hearing stuff like that all our lives; it doesn't even register in our brains anymore. We have been using so-called multi-platform libraries that demand slight changes in code to optimize it (or even make it work) on more than one system, for instance. We learned to accept that we need to populate our code with pragma directives and other "niceties" to make it work across more than one platform. Programmers' most quoted mantra is "there's no free lunch". For a reason.
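To give one concrete, purely illustrative example of those "niceties" (the snippet below is mine, not taken from any particular project): the same source ends up carrying both an OpenMP pragma and a preprocessor guard so that it still builds on a toolchain that knows nothing about OpenMP, and only performs where the toolchain cooperates.

```cpp
#include <cstdio>

// The _OPENMP macro and omp.h are only available when the compiler is
// invoked with OpenMP support (e.g. -fopenmp on GCC), so the include and
// the diagnostic below have to be guarded.
#ifdef _OPENMP
#include <omp.h>
#endif

int main() {
    const int n = 1000000;
    static double data[1000000];

    // Honoured only when OpenMP is enabled; other compilers silently
    // ignore the pragma and run the loop on a single core. The code
    // compiles everywhere either way, which says nothing about how
    // well it runs anywhere.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        data[i] = i * 0.5;

#ifdef _OPENMP
    std::printf("ran with up to %d threads\n", omp_get_max_threads());
#else
    std::printf("ran single-threaded\n");
#endif
    return 0;
}
```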
The write once, run anywhere delusion is debated even in the places where it was specifically meant to work. If there is one thing any Java programmer learns very early, it is that there's a difference between what they were told about Java and what they will actually be doing. And this has never been more true than with the arrival of mobile applications, which all but removed the language's nerve to keep using the "compile once, run anywhere" slogan.
.....
Now, there is however something that must be said in Intel's defense.
The brunt of this work will not fall on the software developer, but on compiler and library developers. For most programming languages, these two components abstract away from the software developer all the innards of the system they are programming for. So from the perspective of the software developer (the "customer" of compiler and library developers), there is indeed very little reason to expect things to change. They can just recompile their code with the necessary flags and be done with it.
But compiler developers, and some library developers as well, will need to rethink their code and adapt it to the new architecture, regardless of what Intel may want to say to please its board of shareholders. Those who don't will be forcing onto the software developer the task of adapting their code, which will only further undermine Intel's argument.
The higher-level the programming language, the less a software developer usually needs to worry. It's quite safe to assume, for instance, that C# developers won't see any change in semantics or syntax and can thus just recompile their code with whatever flags are needed. Microsoft is bound to give them peace of mind in this regard and completely abstract the new architecture away from them. Or so we should hope, especially in light of the big changes to .NET's parallel programming support over the past two versions (which I will be discussing in an upcoming article here on TG). C++ developers, on the other hand, will probably be faced with changes to their current libraries or compilers that require small adaptations in their code in order to take full advantage of the new architecture.
What I think Steve wants to stress is that it is quite acceptable to believe we may indeed be able to just recompile our code (although even this remains to be seen), but that we won't be taking full advantage of the hardware if we do. What's the point of programming for an architecture redesigned to make instructions flow through the processor more efficiently, if we then produce less efficient code on purpose? Of course on HPC systems we will want to code in ways that extract the most juice out of our machines! And for that we will take a close look at the parallel architecture of the platform and exploit it as far as we can. That involves tweaks and optimizations to our code. The idea that this new architecture can be used with just a recompilation of existing code isn't merely unacceptable, it is slightly offensive to any software developer whose task is precisely to make the most of an HPC system.
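To make that last point concrete, here is a minimal sketch, entirely my own illustration rather than anything from Steve Scott's remarks or Intel's material, of the gap between code that merely recompiles and code that has been adapted to a many-core part:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Straight port: recompiling this for a many-core processor changes nothing.
// It still runs as a single stream of instructions on a single core.
double sum_of_squares_naive(const std::vector<double>& v) {
    double acc = 0.0;
    for (std::size_t i = 0; i < v.size(); ++i)
        acc += v[i] * v[i];
    return acc;
}

// Tuned variant: the reduction clause gives each hardware thread a private
// partial sum, and the simd hint invites the compiler to use the wide
// vector units these architectures are built around.
double sum_of_squares_tuned(const std::vector<double>& v) {
    double acc = 0.0;
    const long n = static_cast<long>(v.size());
    #pragma omp parallel for simd reduction(+:acc)
    for (long i = 0; i < n; ++i)
        acc += v[i] * v[i];
    return acc;
}

int main() {
    std::vector<double> v(1 << 20, 1.5);
    std::printf("naive: %f  tuned: %f\n",
                sum_of_squares_naive(v), sum_of_squares_tuned(v));
    return 0;
}
```

Built without OpenMP the two functions are equivalent; built with it, only the second has any hope of keeping dozens of cores and their vector units busy. The point is simply that the hint has to be written by somebody, whether a compiler vendor, a library author, or the software developer.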