Thanks for the follow-up comments.
You may be one of the more serious gamers out there, and I respect that, but I still stand by my opinion that people who game at lower resolutions don't need an expensive graphics card. In the example I gave, I wasn't talking about barely-playable settings, but fully-playable ones. Sure, the gameplay wasn't as buttery-smooth as it would have been on a 9800 GTX, but realize I'm talking about the 4.1-megapixel resolution of 2560x1600, not the 1.3-megapixel resolution of 1440x900.
If that $125 card (at the time) could handle the top game of the day at max resolution, I'm confident that any GPU in the $150 price range today is going to be stellar for anyone on a lower resolution. Games haven't advanced so much since then that my opinion would change. If you're looking for anti-aliasing, things might vary a bit, but I'm still doubtful that a huge graphics card is needed. Again, I played CoD 4 with 4xAA at a monster resolution on a $125 GPU... things have only gotten better since then (from a price/performance perspective).
Just take a look at some of the results from the lower-end cards in our graphs, such as the $100 HD 4830. It managed to deliver 37.051 FPS in Crysis Warhead (Mainstream) at 1920x1200, 36.5 FPS in Call of Duty: World at War at 1680x1050, 30 FPS in Far Cry 2 at 1680x1050, 88.324 FPS in Left 4 Dead at 1680x1050, 52.484 FPS in Mirror's Edge at 1680x1050 and 58.101 FPS in Need for Speed: Undercover, also at 1680x1050.
Realize that these are higher resolutions than 1280x1024 or 1440x900 (1680x1050 has 36% more pixels than 1440x900), so at those lower resolutions, the only direction the average FPS can go is up. That's on a $100 graphics card, and each of those resolution/setting combinations (all featured 4xAA except for Crysis) was completely playable, with the possible exception of Far Cry 2, which really needs around 40 FPS to be fully playable. I really, really don't understand why a bigger GPU is needed for resolutions even lower than these. If you don't see things that way, we can simply agree to disagree.
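For anyone curious where those pixel figures come from, here's a quick back-of-the-envelope calculation (a minimal sketch in Python, using only the resolutions quoted above):

```python
# Pixel-count comparison for the resolutions discussed above.
resolutions = {
    "1440x900":  (1440, 900),
    "1280x1024": (1280, 1024),
    "1680x1050": (1680, 1050),
    "1920x1200": (1920, 1200),
    "2560x1600": (2560, 1600),
}

baseline = 1440 * 900  # 1,296,000 pixels (~1.3 megapixels)

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.1f} MP "
          f"({(pixels / baseline - 1) * 100:+.0f}% vs. 1440x900)")

# Approximate output:
# 1440x900:  1.3 MP (+0% vs. 1440x900)
# 1280x1024: 1.3 MP (+1% vs. 1440x900)
# 1680x1050: 1.8 MP (+36% vs. 1440x900)
# 1920x1200: 2.3 MP (+78% vs. 1440x900)
# 2560x1600: 4.1 MP (+216% vs. 1440x900)
```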
As for our numbers not matching up with other websites, this is an issue I'm investigating and take seriously. It's puzzling, but I'm starting to wonder if NVIDIA made a slew of really worthwhile optimizations in their drivers, because as of a few months ago, their cards didn't dominate the charts like this. We posted an "HD 4870 1GB vs. GTX 260/216" article in December, with an almost identical selection of games, and the performance from both cards was near-identical. So, something is up.
We're looking into revising our GPU game suite once again, this time being careful not to choose games that favor one maker's cards over the other's (unless it happens to be a blockbuster game, because at that point, it wouldn't be fair to leave it out). It's difficult, though, because there's no guarantee that one company won't optimize its drivers to the nines for the games we use. It's a real problem, especially given the amount of time and effort that goes into re-testing each card (we benchmark 100% manually and never use timedemos, save for 3DMark).
Trust me when I say that we don't have an agenda, and I wanted nothing more than to see the HD 4890 perform more competitively throughout our results here. If you only knew the hassle that went into wrapping this article up (and the reason we were a full day late in publishing), you would realize that we aren't about to favor NVIDIA just for fun (that's as much as I'm saying on that particular matter).
We report things as we see them, and if there are any inconsistencies in our results, it could be due to driver optimizations for a particular title we're using. If another site happened to get different results for a title we used, it will be due either to a different level (or method of testing) being used, or to the results being achieved with a timedemo. As I mentioned (and as our testing methodology page points out), we manually benchmark each game and shun timedemos in order to deliver the most accurate results possible.