Putting Four Intel X25-E Together in RAID 0

Rob Williams

Editor-in-Chief
Staff member
Moderator
From our front-page news:
Late last week, we linked to Tech Report's article on ACard's DDR2-based storage drive, which, for the most part, was quite impressive. It lacked the overall capacity we'd like to see, but it was fast, even keeping up with Intel's X25-E in many tests. Speaking of which... ever wonder what performance you'd see if you paired four of those drives together in RAID 0?

Well, Geoff apparently has, and he's put his ideas to paper, with some results to drool over. The first thing to consider is that Intel's X25-E is one heck of a drive to begin with. Although it's available only in a 32GB flavor, it offers 250 MB/s Read and 170 MB/s Write speeds... also known as really freaking fast. Pairing four of them together didn't exactly quadruple the performance, but what was found is still rather impressive.

In the HD Tach synthetic test, the configuration achieved 560 MB/s Read and 515 MB/s Write, which, while not quite 4x, is still rather insane. At that sustained Write speed, you'd be able to fill the entire 128 GB array in just under 250 seconds. Real-world tests enjoyed pretty sweet gains over the single-drive configuration as well, but for this kind of eye-popping performance, you're really going to want a good reason to part with the nearly $2,000 required to get the configuration going.
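As a quick sanity check on that figure (a rough sketch, assuming the array's usable capacity is the full 4 x 32 GB = 128 GB, counted in decimal megabytes as drive makers do):

# Back-of-the-envelope fill time for the four-drive array
capacity_mb = 4 * 32 * 1000        # 128 GB expressed in MB (decimal)
write_mb_s = 515                   # sustained Write speed from the HD Tach run
print(capacity_mb / write_mb_s)    # ~248.5 seconds, a bit over four minutes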

[Image: Intel X25-E (source: Tech Report)]

While the X25-E's dominating single-drive performance would surely satiate most folks, its target market is likely to seek out even greater throughput and higher transaction rates by combining multiple drives in RAID. The performance potential of a RAID 0 array made up of multiple Extremes is bountiful to say the least, and with the drive's frugal power consumption and subsequently low heat output, such a configuration should cope well in densely populated rack-mount enclosures. Naturally, we had to test this potential ourselves.


Source: Tech Report
 

Greg King

I just kinda show up...
Staff member
Holy shit. This is amazing. Even if a tad expensive.

Price aside, 500+ MB/s read and write is incredible.
 

Kougar

Techgage Staff
Staff member
Yeah, but almost every program or application seemed to take a performance hit, which is just odd.

That RAID controller is the best I know of: a 1.2GHz dual-core, high-I/O model from Intel... the only thing left would be doubling the 256MB cache to 512MB, but that wouldn't have changed much in the end. The results were extremely disappointing... the only place that setup performed as expected was file storage/copies/creation. But who would buy four SSDs and RAID 0 them if all they were going to do was use them to store music or videos?

None of the content creation or drive speed tests, not even the burst test, showed 4x the performance of a single drive. At best, that four-drive RAID 0 setup was performing like a two- or three-drive RAID 0. I still wonder why.
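For what it's worth, here's a rough sketch of the scaling I'm basing that on, assuming Intel's rated single-drive speeds and the HD Tach numbers quoted above:

# Rough read/write scaling of the four-drive array vs. a single X25-E
single_read, single_write = 250, 170    # MB/s, Intel's rated X25-E speeds
array_read, array_write = 560, 515      # MB/s, HD Tach results for the 4x RAID 0
print(array_read / single_read)         # ~2.2x read scaling
print(array_write / single_write)       # ~3.0x write scaling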
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Your concerns are valid, that's for sure. At least in synthetic benchmarks, you'd expect to see close to 4x gains, but it's as though there's another bottleneck. Could it be the S-ATA bus? Heck, could it even be the processor becoming a small bottleneck at that point? Their test rig uses an old P4, so maybe we'd see slightly different results on a newer machine. Then again, maybe not; I've never really explored this area exhaustively.
 

Merlin

The Tech Wizard
I'm seeing the small cache (64MB) not clearing out fast enough for the next set of write commands.
That seems to be the bottleneck there.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
It's hard to believe that 64MB of cache would be the bottleneck, although anything's possible; I'm not an engineer. It's even more humorous since 32MB of cache has only recently become the standard on mechanical drives, hah.
 

Kougar

Techgage Staff
Staff member
While I'm curious as well, in theory the processor shouldn't be the bottleneck, because the RAID card handles the additional load, I believe. And the SATA bus shouldn't matter, because each drive has its own cable running directly to the RAID card... I wonder if it's the northbridge or the southbridge driving the PCIe RAID card; the southbridge could be bottlenecked by the DMI interface.
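A quick back-of-the-envelope check on that idea, assuming DMI offers roughly 1 GB/s in each direction (it's essentially a PCIe x4 link, so the exact figure depends on the chipset):

# Would a perfect 4x read saturate a ~1 GB/s DMI link? (assumed bandwidth, not measured)
dmi_mb_s = 1000                     # assumed one-way DMI bandwidth in MB/s
ideal_array_read = 4 * 250          # four X25-Es at their rated 250 MB/s each
print(ideal_array_read / dmi_mb_s)  # 1.0 -- a full 4x read would use all of it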

Merlin said:
I'm seeing the small cache (64MB) not clearing out fast enough for the next set of write commands.
That seems to be the bottleneck there.

I don't understand how that's possible; the card has 256MB of DDR2 cache memory. Which tests led you to that conclusion?
 