Somewhere in a closet I have a publication Motorola put out about 80286/68020 comparisons that it said Intel was fudging; I wish I could dig it out.
You have to give credit to Intel for one thing... few other semiconductor companies have been accused of cheating for nearly 30 years ;-)
And of course we all remember the Intel compiler that checks for instruction availability in a way that ensures those instructions won't be used on AMD processors that in fact support them, even though those processors set the appropriate capability bits to indicate their presence in a proper test.
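For illustration, here's a minimal Python sketch of the difference between the two dispatch styles being described: gating an optimized code path on the CPU vendor string versus trusting the capability bits the CPU actually reports. The vendor strings ("GenuineIntel", "AuthenticAMD") and the CPUID leaf 1 ECX bit positions are from the public Intel/AMD documentation; the dispatch functions themselves are hypothetical, not the compiler's actual code.

```python
# CPUID leaf 1, ECX register feature flags (per Intel/AMD manuals):
SSE41_BIT = 1 << 19  # SSE4.1 supported
SSE42_BIT = 1 << 20  # SSE4.2 supported

def vendor_gated_dispatch(vendor: str, ecx: int) -> str:
    # The criticized approach: only use the fast path on "GenuineIntel",
    # ignoring the capability bits on everyone else's silicon.
    if vendor == "GenuineIntel" and ecx & SSE42_BIT:
        return "sse4.2 path"
    return "generic path"

def feature_bit_dispatch(vendor: str, ecx: int) -> str:
    # The proper test: trust the capability bits the CPU reports,
    # regardless of which vendor made it.
    if ecx & SSE42_BIT:
        return "sse4.2 path"
    return "generic path"

# A hypothetical AMD CPU that reports SSE4.1 and SSE4.2 support:
amd_ecx = SSE41_BIT | SSE42_BIT
print(vendor_gated_dispatch("AuthenticAMD", amd_ecx))  # generic path
print(feature_bit_dispatch("AuthenticAMD", amd_ecx))   # sse4.2 path
```

Both functions see the same capability bits; only the vendor-gated version leaves the AMD chip on the slow path.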
That's one reason I've come to appreciate SPEC's CPU benchmark, since it's architecture-agnostic. Nowadays, though, Intel has the biggest advantage by being first to market with certain instruction sets, while the ones AMD introduces don't seem to get used in commercial applications. SSE4 is a big example of this: video encoders that could take advantage of it saw huge gains.
Until you can test it in-house, NO. There is a difference between faith and blind faith. Without a specific reason, real data to back it up, and a third party to verify it, there is no reason for me to take this on blind faith from AMD. They are a good company at what they do, but they do not deal in the truth, and I do not believe any one company without proof.
Our copy will be here before the weekend; I'm not sure what's taken it so long. Due to the AMD Lynx launch right around the corner, I am not sure how much time I'll have to dedicate to testing it out, but I hope to tackle it soon. BAPCo told us that a whitepaper would be released soon, so I'm looking forward to looking through that.
First off, thanks for letting us reply without registering.
No problem; thanks for taking the time to post!
I'm going to side with AMD here, to a point. My biggest question is: what do they suggest instead? Since this is a test for businesses, I do think it should only be published with a required disclaimer saying it's for business applications and that the scores do not reflect what average consumers would experience. Because AMD is right: consumers should not look at these results and use them to compare two CPUs to decide which is better. I have not used FineReader OCR since 2001 (FYI, it worked great back then!) and I would not care how well a CPU does OCR.
I couldn't agree more on the latter point. While it does have its niche market, most regular consumers don't even know what OCR is, and if they did, they likely wouldn't have a need for it. So including it in a benchmarking suite that aims to give consumers a good idea of what their PC is capable of is a bit strange. It'd be like Toyota giving us stats on how well its Corolla handles driving on molten rock... it's just not that relevant for most people.
As a consumer, I am concerned about Internet functionality. Different versions of Flash, Silverlight, and JavaScript should be tested. Google Chrome, Opera, and different versions of IE should be on the list. iTunes is missing too. Transcoding is probably covered by After Effects; if not, it should be on this list as well.
To be honest, I think some of the things mentioned here are a little more specific than this benchmark intends to be. If SYSmark had the goal of dividing up the scores at the end further to explain which browser performed the best, that might be a little different, but this is a benchmark designed to last for a couple of years. That's where things get tough, because SM2012 releases with IE8, when IE9 is available... and when this benchmark is retired, we might well see IE11. At the same time, the IE choice in general is a little questionable.
I agree with the general idea, though. Lots of people use things like iTunes, so that test (or one similar) should be included. The same goes for transcoding and the like, though SM2012 might touch on that a little. With what you've mentioned, I think PCMark 7 is the better benchmark for that kind of consumer-level use. It tests out some Web technologies in a custom browser, and also tackles video conversion, image manipulation, et cetera.
That all said, outside of suite usage we use individual benchmarks in our own testing for all the things mentioned above. We don't use iTunes, though we might implement dBpoweramp for the sake of encoding later (unless iTunes proves itself to be a reliable, repeatable benchmark).
All they really had to do was get in contact with Intel (which, oddly enough, they DO know how to work with) and ask them to add their CPU IDs to the list, or offer an alternative piece of processor-detection code for their compilers that would make it possible to identify AMD processors capable of running those code paths. Instead, some whiner baby ran to the press and cried about how Intel was being so mean because they didn't bother testing whether their competitor supported certain instructions.
Add their CPU IDs to what list? Intel's compiler? I'm still not sure this is the biggest issue here, as Intel has instruction sets AMD does not, and vice versa (it's just that no one seems to take AMD's seriously from what I can tell, or Intel's are generally more important / better built). AVX in particular is one of the first times I can recall where both Intel and AMD architectures will include the same instruction set within the span of a year.
Thanks for the comments folks. I wish I could comment on more, but I am not familiar with all of it.