The Quest for Useful Real-World SSD Benchmarks

Rob Williams

Editor-in-Chief
From our front-page news:
As I have mentioned in our news section before, we've been working on getting some storage-related content up on the site for a while, but as things go, issues keep arising and it never seems to happen. As it stands, however, we're closer than ever, and I'm confident you'll be reading some articles in the next few weeks, tackling both HDDs and SSDs.

If there's one thing we've learned over the past few months, though, it's that storage benchmarking is hard. Alright, let me elaborate. It's not that benchmarking is hard in general, but it's hard as heck to find appropriate benchmarks that people care about. Throughout all of our content, we strive to deliver as many real-world results as possible. I think it's only fair that if we are to deliver information, it should be as realistic as possible, and it should be information people can relate to.

But with storage, things are tough. Synthetic benchmarks will show the benefits of faster drives just fine, but when it comes to real-world tests, the task of showing the benefits of either HDDs or SSDs becomes very complicated. Believe it or not, the vast majority of scenarios we've tested simply don't show real differences in performance from drive to drive, as long as we're talking about drives with like speeds (7200RPM drives would of course be faster than 5400RPM drives, but even then, it's primarily the synthetics that would prove it).

For the past month or two, Robert and I (yes, there's another Robert, in case you haven't noticed) have been working together on conjuring up the best possible SSD test suite. The problem, of course, is that despite having a nice collection of synthetic benchmarks, we have almost no real-world tests, the main exception being SYSmark 2007 Preview (which, ironically, is still not a true real-world benchmark). In our CPU reviews, I focus a lot on applications such as image manipulators, video encoders, 3D rendering tools and more, but in all of our testing, the performance difference between an HDD and an SSD in any of these is non-existent.

As a specific example, our Adobe Lightroom test converts 100 RAW files to 100 JPEGs. You'd imagine that a test like this could push a storage device nicely, but that's not the case. In the end, the time-to-completion was identical on both an HDD and an SSD. The same applies to video encoding. The problem is that neither of these scenarios pushes the kind of bandwidth we need to see faster drives, like SSDs, excel. But on the other hand, our scenarios are realistic.

[Image: Intel SSD G2 press shot]

What's frustrating about all this is that SSDs are fast, and offer real benefits to consumers. But for the most part, what we're seeing is that the benefits have almost exclusively to do with application loading, Windows loading, game level loading, and other similar processes. Random writes are of course much faster on an SSD as well, but outside of a synthetic benchmark, it's very difficult to find a real test that can help prove it.

So my question to you all is this... what would YOU like to see us tackle in our upcoming SSD content? Is there a real-world test that you're confident would highlight the benefits of an SSD, or the benefits of one SSD over another? Please let us have it, because we could sure use some ideas. If not, we'll have to resort to including some more mundane tests, such as SSD noise levels! *

* Yes - that was sarcasm.
 

2Tired2Tango

Tech Monkey
Believe it or not, the vast majority of scenarios we've tested simply don't show real differences in performance from drive to drive, as long as we're talking about drives with like speeds (7200RPM drives would of course be faster than 5400RPM drives, but even then, it's primarily the synthetics that would prove it).

Welcome to the age-old problem... how do you test something that is faster than the test station you have to hook it up to?

Various content-creation tasks may look disk-intensive -- "wow, that light's been on a long time" -- but they probably aren't. Odds are that any program writing large amounts of data spends more time creating/modifying/preparing the data than reading or writing it. The drive light is on because the interface hasn't turned it off; the drive is not being continuously fed data... so this actually turns out to be a CPU/GPU/RAM test (as you've already noted).

What you need is a series of real-world tasks that are very drive-intensive but make very little use of CPU (etc.) time.

The test you're looking for is not Content Creation; it's Content Duplication.

Take a look at what most people do with files on a day-to-day basis... they download stuff, sort it, copy it, move it around, rename it, make backup copies, etc. These are very disk-intensive chores that are basically mindless in CPU terms...

Frankly, I don't much care if my shiny new 27TB, low-power, green, self-cooling, 1.3" SSD can save a 6-page email in 1.07 milliseconds or 1.08... but I am going to care if it's going to take 12 hours instead of 36 to back up my database files.

But then there's that bugaboo about the device under test being faster than its test station... so you need to make it test itself... Try using batch files (yes, MS-DOS stuff) to copy a standard 10GB kit of files of varying sizes, all running from a single folder TEST_FILES\, onto the drive... make copies of it until the disk is full. Now run batch files to delete and replace copies multiple times (the more the better)... timing everything along the way, of course... something along the lines of the sketch below.
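
A minimal sketch of that fill-then-cycle test, done in Python rather than batch files for brevity; the folder names and the 10GB kit are placeholder assumptions, not anything standardized:

Code:
# A fill-then-cycle test along the lines described above. TEST_FILES and the
# target drive are placeholders; adjust for the machine at hand.
import shutil
import time
from pathlib import Path

SRC = Path(r"C:\TEST_FILES")      # ~10GB kit of files of varying sizes
DST = Path(r"E:\fill_test")       # folder on the drive under test
DST.mkdir(parents=True, exist_ok=True)
KIT_BYTES = sum(f.stat().st_size for f in SRC.rglob("*") if f.is_file())

def timed(fn, *args) -> float:
    """Run fn(*args) and return elapsed seconds."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

# Phase 1: copy the kit repeatedly until the drive is nearly full.
copies = []
while shutil.disk_usage(DST).free > KIT_BYTES * 2:
    target = DST / f"copy_{len(copies):03d}"
    print(f"fill {target.name}: {timed(shutil.copytree, SRC, target):.2f}s")
    copies.append(target)

# Phase 2: delete and replace copies multiple times (the more the better).
for cycle in range(10):
    victim = copies[cycle % len(copies)]
    print(f"delete {victim.name}: {timed(shutil.rmtree, victim):.2f}s")
    print(f"rewrite {victim.name}: {timed(shutil.copytree, SRC, victim):.2f}s")

Note that every pass still reads the kit from the source drive, so on a fast SSD the copy numbers are bounded by the source's read speed -- the very bottleneck discussed further down the thread.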

I'm attaching a little proggy that will read all the files on a drive and test them for validity... it's pretty intensive and will give you a good idea of the sequential read speeds of a drive.

For IO devices it's always the long tests that are the most telling...
 

Attachments

  • FSScan.zip
    296.5 KB

Psi*

Tech Monkey
Here is a quick description of the technology. I think the "Disadvantages" section could be a good area to focus testing on. Testing here is perhaps a bit pessimistic, but for what the drives sell for, I would like to know what the drive lifetime is. How quickly does an SSD slow to HDD speeds? And is that even a pertinent question? Scanning the net suggests that it is, but ...

The concept of wear leveling could be explored and/or at least explained here.

Also, testing the saturation limits of the drive channel might be interesting. Interesting only in understanding what other reviewers don't get or understand. What really is the limiting issue... the drive, or the drive channel, which I guess is SATA 6Gbit/s? OK, that is confusing. Is this even available yet?

Real-world tests for whom? A gamer? Most of that drive use is reading. Well, most drive use *is* reading data, but I believe the share is even higher for a gamer... what do they really have to save?

The worst-case drive channel issue that I have seen is with mechanical CAD. The CAD models are really databases that are constantly being updated and read. A model that is an assembly of various parts hammers the drive... but how many of Techgage's visitors care about this?

Related to this, I don't see how the drive can be faster than the test station. Or perhaps I just misunderstand 2Tired's comment. This would be like saying that main memory is too fast to test latency.
 

2Tired2Tango

Tech Monkey
Related to this, I don't see how the drive can be faster than the test station. Or perhaps I just misunderstand 2Tired's comment.

Perhaps an example...

Let's say you are testing write speed by copying files from one drive to another. The fastest this can possibly go is the speed of the slowest drive. So if you are moving files from an HDD to an SSD, you are actually testing the read speed of the HDD, not the write speed of the SSD. Since the SSD is several times faster than the HDD, its speed is limited by the HDD. One way around it is sketched below.
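
One way around the slow source is to generate the test data in RAM and write it straight to the drive under test, so nothing slower sits in the path. A minimal sketch; the target path and sizes are assumptions:

Code:
# Measure raw sequential write speed by sourcing data from RAM instead of
# another (slower) drive. Target path and sizes are illustrative only.
import os
import time

TARGET = r"E:\write_test.bin"       # file on the drive under test
CHUNK = 64 * 1024 * 1024            # 64MB buffer held in RAM
TOTAL = 2 * 1024 * 1024 * 1024      # write 2GB in total

buf = os.urandom(CHUNK)             # pre-generate data so CPU time isn't measured
start = time.perf_counter()
with open(TARGET, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += CHUNK
    f.flush()
    os.fsync(f.fileno())            # make sure data actually hit the drive
elapsed = time.perf_counter() - start
print(f"{TOTAL / elapsed / 1024**2:.1f} MB/s")
os.remove(TARGET)

The same trick works in reverse for read speed: read the file back and throw the data away, so only the drive's read path is being timed.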
 

Psi*

Tech Monkey
Perhaps an example...

Let's say you are testing write speed by copying files from one drive to another. The fastest this can possibly go is the speed of the slowest drive. So if you are moving files from an HDD to an SSD, you are actually testing the read speed of the HDD, not the write speed of the SSD. Since the SSD is several times faster than the HDD, its speed is limited by the HDD.
Ahhh, of course ... I was still tired yesterday from driving halfway across the country.
 

Psi*

Tech Monkey
You've got it! 14 hours behind the wheel... ugh :eek:... I don't like to think about it, but I have done it many times.

About reviewing SSDs, another angle might be movie downloads. A couple of years ago I built a computer dedicated for use as an entertainment center, and it has had many movies downloaded to it. Older movies get deleted as newer ones are downloaded. No thought or attempt to defrag is ever made. These files have been growing in size as higher-def movies have become available.

Of course, HDD speed is of little concern for either storing or playing back movies, but this is an example of fairly frequent large-file writing and erasure. So my question would be: "If I had a machine with an SSD serving this role, when would it degrade to the point of being a problem? How many 20GB or 30GB writes would cause a noticeable slowing?"

If the answer is less than a year's worth of a couple of movies a month... that would be a problem. If the answer is 5 years of downloading 4 or 5 movies a month... then that would be great. The answer could even be 100s to 1000s of multi-gigabyte file erasures and writes, with degradation being a benchmark-only kind of measurement. Then the degradation issue, though real, is still something of an internet myth.

Assuming that there would be real and meaningful degradation, how fast does the drive's performance degrade? Is it a gradual slope? Is there some kind of "knee"?
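
One way to look for that knee: write and delete a large file in a loop and log each pass's throughput, then look at the trend. A sketch under assumed paths and sizes (an illustration, not an established test):

Code:
# Log write throughput across repeated large-file write/delete cycles to see
# whether (and how) performance degrades. Paths and sizes are placeholders.
import csv
import os
import time

TARGET_DIR = r"E:\degrade_test"
FILE_SIZE = 20 * 1024**3          # one ~20GB "movie" per cycle
CHUNK = 64 * 1024**2
CYCLES = 200

os.makedirs(TARGET_DIR, exist_ok=True)
buf = os.urandom(CHUNK)

with open("throughput_log.csv", "w", newline="") as log:
    writer = csv.writer(log)
    writer.writerow(["cycle", "MB_per_s"])
    for cycle in range(CYCLES):
        path = os.path.join(TARGET_DIR, "movie.bin")
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(FILE_SIZE // CHUNK):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())
        mb_s = (FILE_SIZE / 1024**2) / (time.perf_counter() - start)
        writer.writerow([cycle, f"{mb_s:.1f}"])
        os.remove(path)           # delete, then rewrite next cycle

A plot of the log would show whether the decline is a gradual slope or a cliff.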
 

2Tired2Tango

Tech Monkey
If the answer is less than a year's worth of a couple of movies a month... that would be a problem. If the answer is 5 years of downloading 4 or 5 movies a month... then that would be great.

Let's hope it's more like 5 years of downloading, moving, backing up and renaming 4 or 5 movies a day. If it's just going to spin down to HDD performance after 300 1GB files, then it's both cheaper and smarter to stay with HDDs.


Assuming that there would be real and meaningful degradation, how fast does the drive's performance degrade? Is it a gradual slope? Is there some kind of "knee"?

Agreed... important information for anyone building disk intensive setups...
 

Kougar

Techgage Staff
I apologize for the late reply; I've been letting this get away from me, as I'm graduating in just a few weeks and things have been slightly hectic!

The issue at hand is trying to "quantify" the responsiveness an SSD gives. Everything loads just a bit faster and stays responsive regardless of what disk-intensive programs might be running in the background. It does focus on the end-user experience, but it's still hard to capture and benchmark that snappy, responsive feel that any longtime user of a (good) SSD will report on... and even harder to benchmark the feeling of going back to a regular hard-drive-based machine after having used an SSD!


2Tired, thank you for your thoughts! The batch files idea is one I've been working on for a few weeks, but it's not been without its own set of problems. Unfortunately, so far it has been our only real lead.

Regarding your FSScan program, here are some numbers for ya, scanning only the Windows folder and subfolders on an identical install: HDD: 2:26, X25 Gen1: 1:06.

Programs like this, which spend most of their time running random 4KB-or-smaller file access patterns, stand to gain the most from an SSD; it's honestly the best-case scenario for an SSD. I tried doing a full-drive scan, but it would only scan the root directory and ignore the subfolders.
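
For illustration, here is roughly what that access pattern looks like; this is a sketch of the pattern, not FSScan's actual code, and the scan root is an assumption:

Code:
# Time random reads of many small files -- the access pattern where SSDs gain
# the most over HDDs. The scan root is an assumption, not FSScan's source.
import random
import time
from pathlib import Path

ROOT = Path(r"C:\Windows")
files = [p for p in ROOT.rglob("*") if p.is_file()]
random.shuffle(files)              # defeat any on-disk ordering

start = time.perf_counter()
total = 0
for path in files:
    try:
        total += len(path.read_bytes())
    except OSError:
        pass                       # skip locked/unreadable system files
elapsed = time.perf_counter() - start
print(f"{len(files)} files, {total / 1024**2:.0f} MB in {elapsed:.1f}s")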

Psi, testing things like wear leveling and attempting to wear out an SSD are just not possible; even with a program writing data constantly, it would take several months before any sectors got worn down enough to go bad. Intel rates their drives for 20GB of data writes per day for five full years. I tend to believe this estimate, as they warranty these drives for five years and have likely done the testing before offering such a warranty. Considering that so many HDDs don't last that long, I don't personally think SSD longevity is an issue with Intel's drives.
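
For scale, here's the arithmetic behind that rating, using only the numbers quoted above:

Code:
# Total host writes implied by Intel's "20GB/day for five years" rating.
gb_per_day = 20
days = 365 * 5
print(f"{gb_per_day * days / 1000:.1f} TB")   # ~36.5 TB of writes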

You should always check those wiki sources; in the same Disadvantages section they cite almost ALL of Anandtech's own articles and PCPR's original article on SSDs. The original PCPR and Anandtech articles document the slowdown effects well, but it's the newer reviews on these sites that show Intel's Gen1 firmware upgrade mostly mitigated the problem... and that TRIM support fixes it for those drives that support it. If you wrote 80GB to your X25-M Gen2 drive, deleted it, and wrote a new file, it would write at the same speed as it did originally.

Some drives can already saturate the SATA interface in read speeds, and certainly in burst speeds. If you look at any recent review that offers a large chart of SSDs to sample from, it's pretty clear that just about all the best SSDs top out at ~265MB/s, right around the practical ceiling of SATA 3Gb/s, yet PCIe-based devices don't have this issue at all.

Back to wear leveling and write amplification: these are valid issues to address, and each SSD controller handles things differently based on its own programming and optimization algorithms. While write amplification decreases the lifespan of a drive, wear leveling does the opposite by evenly distributing writes across all sectors of the drive. Most midrange to higher-end SSDs actually include several GBs of spare capacity, partly to serve as backup sectors (like a traditional HDD), but also as extra area to spread writes across as the drive fills to capacity.
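
As a toy illustration of the wear-leveling idea (not any vendor's actual algorithm): steer each write to the least-erased block, and wear stays even across the drive.

Code:
# Toy wear leveling: always program the least-worn block, so erase counts
# stay nearly uniform. Real controllers are far more sophisticated.
BLOCKS = 8
erase_counts = [0] * BLOCKS

def write_somewhere() -> int:
    """Pick the block with the fewest erases, 'erase' it, return its index."""
    target = min(range(BLOCKS), key=lambda b: erase_counts[b])
    erase_counts[target] += 1
    return target

for _ in range(100):
    write_somewhere()
print(erase_counts)   # [13, 13, 13, 13, 12, 12, 12, 12] -- evenly spread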
 

Psi*

Tech Monkey
I apologize for the late reply; I've been letting this get away from me, as I'm graduating in just a few weeks and things have been slightly hectic!

The issue at hand is trying to "quantify" the responsiveness an SSD gives.
...

Psi, testing things like wear leveling and attempting to wear out an SSD are just not possible; even with a program writing data constantly, it would take several months before any sectors got worn down enough to go bad. Intel rates their drives for 20GB of data writes per day for five full years. I tend to believe this estimate, as they warranty these drives for five years and have likely done the testing before offering such a warranty. Considering that so many HDDs don't last that long, I don't personally think SSD longevity is an issue with Intel's drives.

You should always check those wiki sources; in the same Disadvantages section they cite almost ALL of Anandtech's own articles and PCPR's original article on SSDs. The original PCPR and Anandtech articles document the slowdown effects well, but it's the newer reviews on these sites that show Intel's Gen1 firmware upgrade mostly mitigated the problem... and that TRIM support fixes it for those drives that support it. If you wrote 80GB to your X25-M Gen2 drive, deleted it, and wrote a new file, it would write at the same speed as it did originally.
...
Back to wear leveling and write amplification: these are valid issues to address, and each SSD controller handles things differently based on its own programming and optimization algorithms. While write amplification decreases the lifespan of a drive, wear leveling does the opposite by evenly distributing writes across all sectors of the drive. Most midrange to higher-end SSDs actually include several GBs of spare capacity, partly to serve as backup sectors (like a traditional HDD), but also as extra area to spread writes across as the drive fills to capacity.
Respectfully, I disagree. ;) If it were not possible to measure, then how could a designer know what to design for? There is a rate. It can be discovered by those who pick apart what has been designed; it's just a question of time, as every code can be broken if not leaked.

What I have read someplace is that the SSD degrades in responsiveness. It is not a sudden catastrophic failure as with an HDD, nor due to file fragmentation.

I would like to find that Intel warranty to see what exactly it covers. I am curious: though it starts 220% faster, when does it drop to, say, 200%, or 150% faster than an HDD? If that warranty says the SSD is still 220% faster at 5 years, then that would be just too cool, and $400 for the current SSDs is no big deal.

So I think that even though "20GB per day for 5 years" sounds impressive, there is nothing that suggests the read/write speeds are maintained at the original levels... in fact, it is suggested that they decrease at some rate. That is all I am after: "What is that rate?" If it is only measurable with a benchmark program, fine. But it is quantifiable. It can always be quantified.

A reference which I have been trying to find ever since I first ran across it describes an SSD as a kind of RAID (my analogy). And you touch on that with the reference to several GBs of spare capacity... is this not similar to a RAID HDD stack?

Congratulations on graduation!!!
 

Kougar

Techgage Staff
Respectfully, I disagree. ;) If it were not possible to measure, then how could a designer know what to design for? There is a rate. It can be discovered by those who pick apart what has been designed; it's just a question of time, as every code can be broken if not leaked.

That's fine to disagree, just be warned I stick to my guns! Which part exactly are you referring to here, so I can follow you better?

If you're referring to that responsiveness I was talking about, that is inherent to how flash memory works. Hard drives have an access latency of around 9-14 milliseconds. Intel's latest X25 has a 0.065ms latency. That's not exactly something that shows up during a benchmark, because you only load the program once, but if you were personally using the machine (and are sensitive to program load times, where a 30-second load would be enough to annoy you), then it gives the PC a much more responsive feel.
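
A rough way to see that latency gap for yourself; this is a sketch with an assumed pre-made test file, and the OS cache will flatter repeat runs since no direct-I/O flags are set:

Code:
# Rough access-latency probe: time many 4KB reads at random offsets in a
# large file sitting on the drive under test.
import os
import random
import time

PATH = r"E:\latency_test.bin"     # big pre-made file on the drive under test
SAMPLES = 1000

size = os.path.getsize(PATH)
with open(PATH, "rb", buffering=0) as f:
    start = time.perf_counter()
    for _ in range(SAMPLES):
        f.seek(random.randrange(0, size - 4096))
        f.read(4096)              # one 4KB random read per sample
    elapsed = time.perf_counter() - start
print(f"avg access: {elapsed / SAMPLES * 1000:.3f} ms")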

What I have read someplace is that the SSD degrades in responsiveness. It is not a sudden catastrophic failure as with an HDD, nor due to file fragmentation.

I would like to find that Intel warranty to see what exactly it covers. I am curious: though it starts 220% faster, when does it drop to, say, 200%, or 150% faster than an HDD? If that warranty says the SSD is still 220% faster at 5 years, then that would be just too cool, and $400 for the current SSDs is no big deal.

Yes, and what I am saying is that this was fixed by Intel. If you use a Gen2 X25-M on Windows 7, the drive will give you solid performance throughout its life. Once the drive is mostly full, it should knock a few percentage points off sustained writes, but it does not affect access latency or read times. The general slowdown resulting from drive fragmentation is not really an issue either for a capable SSD. And it certainly won't degrade to "like HDD" performance; that's just plain wrong unless you're looking at old JMicron and Samsung controllers. Some of the original JMicron drives were worse than a hard drive under certain conditions.

Try to find whatever article you remember reading that in... Rob may have good reason to kick my butt on this one, but any major site that's benchmarked this issue has put out newer articles showing it's generally a non-issue for SSDs that can run TRIM under Windows 7. Techgage will soon have something to show this as well. ;)

A reference which I have been trying to find ever since I first ran across it describes an SSD as a kind of RAID (my analogy). And you touch on that with the reference to several GBs of spare capacity... is this not similar to a RAID HDD stack?

Not at all. Some SSDs do use RAID internally; they have 2, 3, or 4 SSDs RAIDed inside of them in order to deliver higher speeds. But even your traditional hard drive has an extra GB or more of capacity you don't see and can't use. The hard drive uses this to remap bad sectors... and an SSD does basically the same thing. Platters aren't magically "sized" to exactly the number on the hard drive box; the company selects a size and number of platters close to any given capacity point, and any extra is marked for spare sectors. As I understand it, the same is done with NAND flash.

Thanks for the congrats! I'll be breathing easier when finals are finished, though!
 