Is RAID Almost a Thing of the Past?

Rob Williams

Editor-in-Chief
Staff member
Moderator
From our front-page news:
We've gone over the importance of keeping your data safe many times over the life of the site, but one method of doing so, RAID, is something we haven't talked about too much. RAID, in layman's terms, is the process of taking more than one drive and making your data redundant, so that if a drive fails, you can easily get your machine back up and running. RAID can also be used for performance, or for both redundancy and performance at once.

For those who use RAID for redundancy, though, a writer at the Enterprise Storage Forum questions the future of the tech, and states that it could be on its way out. Back when hard drives were measured mostly in gigabytes rather than terabytes, rebuilding a RAID array took very little time. But because densities have been growing constantly while speeds haven't improved at the same rate, rebuilding an array on a 1TB+ drive takes far longer than anyone would like... upwards of 20 hours.
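To put rough numbers on that, here's a minimal back-of-the-envelope sketch in Python; the capacities and rebuild rates are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope sketch: best-case rebuild time if an array rebuilds at a
# steady rate. Capacities and rates below are assumptions for illustration only.

def rebuild_hours(capacity_gb: float, rebuild_mb_per_s: float) -> float:
    seconds = (capacity_gb * 1024) / rebuild_mb_per_s
    return seconds / 3600

# A small, older drive rebuilding quickly vs. a 1TB drive rebuilding slowly
# (rebuild rates tend to drop further when the array is serving normal I/O).
print(f"74GB   @ 50 MB/s: {rebuild_hours(74, 50):.1f} h")    # ~0.4 h
print(f"1000GB @ 25 MB/s: {rebuild_hours(1000, 25):.1f} h")  # ~11.4 h
print(f"1000GB @ 15 MB/s: {rebuild_hours(1000, 15):.1f} h")  # ~19.0 h
```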

What's the solution? To get rid of RAID and look for easier and quicker backup schemes, or something entirely different? The writer of the article offers one idea... to have a RAID solution that doesn't rebuild the entire array, but rather only the data that's actually in use. Still, that would take a while if your drive is loaded to the brim with data. Another option is an improved file system. Many recommend ZFS, but that's only used in specific environments, and certainly not on Windows.

This question also raises issues with Microsoft's NTFS, and we have to wonder if now would be a great time for the company to finally update its file system (chances are it already is). It's not a bad FS by any means, but with this RAID issue, and SSDs preparing to hit the mainstream, it seems we're in need of an updated, forward-thinking FS. My question is, if you use RAID, what are your thoughts on it? Do you still plan to use it in the future, despite the long rebuild times?


What this means for you is that even for enterprise FC/SAS drives, the density is increasing faster than the hard error rate. This is especially true for enterprise SATA, where the density increased by a factor of about 375 over the last 15 years while the hard error rate improved only 10 times. This affects the RAID group, making it less reliable given the higher probability of hitting the hard error rate during a rebuild.


Source: Enterprise Storage Forum
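To illustrate what the quote is getting at, here's a rough sketch of the odds of hitting at least one unrecoverable read error (URE) while reading a full set of drives during a rebuild; the error-rate figures are the commonly published spec-sheet values (1 in 10^14 bits for consumer SATA, 1 in 10^15 for enterprise), used here purely as assumptions:

```python
# Rough sketch: probability of at least one unrecoverable read error (URE)
# while reading every surviving drive during a rebuild. Error rates are the
# commonly published spec-sheet values, used as assumptions only.

def p_ure(bytes_read: float, bits_per_error: float) -> float:
    bits = bytes_read * 8
    return 1 - (1 - 1 / bits_per_error) ** bits

TB = 1e12  # bytes

# Rebuilding a degraded 4x1TB RAID 5 means reading the three surviving drives.
for bits_per_error, label in [(1e14, "consumer SATA (1 in 10^14)"),
                              (1e15, "enterprise (1 in 10^15)")]:
    print(f"{label}: ~{p_ure(3 * TB, bits_per_error):.0%} chance of a URE")
```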
 

gibbersome

Coastermaker
So this would only affect people looking for better fault tolerance while not sacrificing performance.

A RAID 1+0 config is used at the lab at work. We work with so much data on a regular basis that if a faster solution were on the table, we'd most likely switch to it.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
A RAID 1+0 config is used at the lab at work. We work with so much data on a regular basis that if a faster solution were on the table, we'd most likely switch to it.

I'd guess that RAID 1+0 is going to be the best for redundancy and performance, but SSDs might change that in the future, if overall density isn't that important. Personally, I don't run RAID, but I still keep things up to date via scripts, and I have three copies total of everything important. I don't get the huge speeds I would with RAID 0, but that's not quite as important to me as avoiding RAID entirely ;)
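For what it's worth, the "copies via script" approach doesn't need to be fancy; here's a minimal sketch with hypothetical paths (a real setup would more likely use rsync or a dedicated backup tool on a schedule):

```python
# Minimal sketch of a "copies via script" backup as an alternative to RAID.
# Paths are hypothetical; run from cron/Task Scheduler in practice.
import shutil
from pathlib import Path

SOURCE = Path("/home/rob/important")         # hypothetical source directory
MIRRORS = [Path("/mnt/backup1/important"),   # two extra copies = three in total
           Path("/mnt/backup2/important")]

for mirror in MIRRORS:
    # copytree() recopies the whole tree over the existing mirror, overwriting
    # older files (requires Python 3.8+ for dirs_exist_ok).
    shutil.copytree(SOURCE, mirror, dirs_exist_ok=True)
    print(f"Updated copy at {mirror}")
```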
 

erick.mendes

Obliviot
Future of RAID

I don't think RAID should be retired anytime soon, as it's one of the best ways to keep data safe and/or fast.

Sure, rebuild times are getting longer, but we are also close to a new generation of interfaces that the next installment of SATA will bring, as the enterprise market won't be left behind (read it as "they are going to make something EVEN faster than SATA 6Gbps and also MORE EXPENSIVE for the enterprise level").

So when we get SATA 6Gbps, rebuild times are also going to drop.

I think the truth is that any volume that is worth keeping in RAID 1+ must also be worth the time to rebuild it.

RAID has had its value proven over and over, and I believe it's not going under history's carpet any time soon. I would dare to predict that it will become the basis of a better technology that will keep our data safer/faster, and, as always, for a fraction of what it would cost using other techs.

... One thing also came to mind now... As long as two of the same are more/better than one of the same, RAID will live ; )
 

Krazy K

Partition Master
I have a 1.1TB RAID 5 and I wouldn't trade it for the world. I know that no matter how cruel I am to the OS and how cruel it is to me, I will always sleep soundly, because I have a hardware RAID and it has worked every day since I implemented it.
 

Psi*

Tech Monkey
I had built a system around 3 WD "RE" drives & set it up with hardware RAID 1+0. After 2 years, this past spring, one of the drives had a sudden-death failure and nothing was recoverable.

Since nothing else failed and this system was a last experiment with RAID, I will probably never go there again. The performance gain was barely measurable, BTW.
 

Merlin

The Tech Wizard
Never used RAID, I didn't see a need
And..... why waste a drive when you could be storing other things on it
Any really important files or documents you can burn to Blu-ray or DVD
 

Kougar

Techgage Staff
Staff member
I don't think RAID is going anywhere... but I won't ever use RAID on my desktop again after having it proven to me that even Intel's Matrix Storage RAID 1 array can be magically "lost" without any advance warning... one reboot later, my RAID 1 array was hosed and the data was unrecoverable short of advanced disk recovery tools.

I operate a RAID 5 array on my NAS, and I think RAID is the perfect complement for a NAS. Short of a fire, a power surge capable of frying my Back-UPS surge protection device, or some other cataclysmic event, I no longer have to worry about RAID arrays vanishing on me, and I also don't have to worry about ever losing my data again.

Never used RAID, I didn't see a need
And..... why waste a drive when you could be storing other things on it
Any really important files or documents you can burn to Blu-ray or DVD

It's cheaper to just buy an extra drive, but that's my opinion, since I deal with a half-full ~800GB RAID 5 array. Not to mention it's infinitely more convenient: the data is always up to date and instantly available.
 

Psi*

Tech Monkey
RAID makes sense in the original context in which I remember hearing about it, and that was several years ago, which is part of the context.

1) The whole mechanical side of HDDs was much slower then. Parallel a pile of them & you'd see a significant improvement. As HDD access times have improved, the payoff from striping a number of HDDs has been reduced, so this is not such an advantage anymore.

2) The other part was cheap drives with various kinds of proactive preventative maintenance measures. If for nothing else, then just hot swapping drives after a certain run time.

As far as 1) is concerned, my experience with the drives I had was absolutely no change in HDD performance. I beat several HDD benchmarks to death w/ & w/o RAID. Faster drives & buffers... big, sophisticated buffers on the drives, as well as the OS cache, I think.

And 2), I knew that I was never going to just pull a perfectly good drive to swap it with another. I thought that I would have gotten some warning. NOT! I guess part of that was that the drives I used were not really that cheap & they do have a 3- or 5-year warranty. It has been about 3 years, but I'm not going to mess with it.

FWIW, the system is in Florida. It was on a UPS. Florida gets plenty of thunderstorms, so a UPS is an obvious thing. The computer is not heavily used, tho it was on 24x7. One night the power went out, and the UPS did allow the system to shut off politely. But upon powering up we had a drive failure... completely failed & hosed all of the movies & games... er, data.:eek:

So there, I understood the purposes of RAID, of which this system met none, and I went ahead & did it anyway. The option is in the BIOS & I was itching to try it.

With fast drives & various ways of accomplishing automatic backups, I see little point to RAID other than for commercial servers, and that is a whole different case that qualifies under the 2 conditions I cite.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
erick.mendes said:
So when we get SATA 6Gbps, rebuild times are also going to drop.

Well, that depends. S-ATA 6Gb/s is going to offer more bandwidth as a whole, but mechanical hard drives themselves aren't exactly going to double in speed. In fact, even current S-ATA 3Gb/s mechanical drives barely touch half of what that bus offers. Here, it's a hardware limitation, and it has little to do with the bus. S-ATA 6Gb/s will increase the speed of RAID rebuilding, but certainly not to such a great extent that it's really noticed.
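As a rough illustration of why the bus isn't the bottleneck (the throughput figures are ballpark assumptions for drives of this era, not benchmark results):

```python
# Rough sketch: sustained mechanical-drive throughput vs. what the SATA bus
# allows. All figures are ballpark assumptions, not benchmark results.

def usable_mb_per_s(line_rate_gbps: float) -> float:
    # SATA uses 8b/10b encoding, so usable bandwidth is roughly line rate / 10.
    return line_rate_gbps * 1000 / 10

sata_3g = usable_mb_per_s(3)   # ~300 MB/s
sata_6g = usable_mb_per_s(6)   # ~600 MB/s
hdd = 110                      # MB/s, a generous sustained rate for a 7200RPM drive

print(f"S-ATA 3Gb/s usable: ~{sata_3g:.0f} MB/s")
print(f"S-ATA 6Gb/s usable: ~{sata_6g:.0f} MB/s")
print(f"Mechanical drive:   ~{hdd} MB/s ({hdd / sata_3g:.0%} of the 3Gb/s bus)")
```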

Psi* said:
I had built a system around 3 WD "RE" drives & set it up with hardware RAID 1+0. After 2 years, this past spring, one of the drives had a sudden-death failure and nothing was recoverable.

It's stories like this that cause me to stay away from RAID. Of course, it helps that my first experience with RAID was a bad one. I didn't at all understand how it worked... I figured I could just toss in a drive and clone it, but the way I did it wiped out my primary hard drive, and I couldn't get any data back. It was at that point that I decided RAID wasn't for me.

Merlin said:
Any really important files or documents you can burn to Blu-ray or DVD

Blu-ray is still far too expensive, and who knows where the format will be in five years? DVD is also a poor choice as far as I'm concerned, because each disc holds roughly 4GB... while a $100 hard drive holds 1,000GB. It's kind of hard to justify not purchasing another hard drive.

Psi* said:
As far as 1) is concerned, my experience with the drives I had was absolutely no change in HDD performance. I beat several HDD benchmarks to death w/ & w/o RAID. Faster drives & buffers... big, sophisticated buffers on the drives, as well as the OS cache, I think.

You said you ran a RAID 1+0? If so, I'm kind of impressed that there were no noticeable speed gains, especially in synthetic benchmarks. The reason a lot of people go RAID 0 is for that very reason alone (well, obviously). I wonder if for some reason synthetic benchmarks can't properly push a RAID 1+0, but can push a RAID 0, simply because there are fewer drives to deal with?
 

Kougar

Techgage Staff
Staff member
1) The whole mechanical side of HDDs was much slower then. Parallel a pile of them & you'd see a significant improvement. As HDD access times have improved, the payoff from striping a number of HDDs has been reduced, so this is not such an advantage anymore.

Improvement in read/write performance, yes. But RAIDing drives does NOT actually decrease the ~9ms (or 12ms, whichever) access latency, because every single drive in the array has the same ~9ms access latency. It might seem counter-intuitive, but a single X25-E can get to and read data before an eight-drive 15,000RPM SAS RAID array is able to access it. Such as in these enterprise benchmarks.

2) The other part was cheap drives with various kinds of proactive preventative maintenance measures. If for nothing else, then just hot swapping drives after a certain run time.

I'm not entirely sure what you are getting at, but hot swapping doesn't require RAID, just AHCI to be enabled. And there isn't any benefit to swapping out drives preemptively, either.

Spinning up is the point where the most wear and tear is made on the drive motor. Once the platters are spinning at speed, inertia will ensure very little needs to be done by the motor to keep them spinning from that point. It's one reason I disable HDD spin down on all of my machines, especially the NAS, and the longevity of my drives would seem to back me up on this.

As far as 1) is concerned, my experience with the drives I had was absolutely no change in HDD performance. I beat several HDD benchmarks to death w/ & w/o RAID. Faster drives & buffers... big, sophisticated buffers on the drives, as well as the OS cache, I think.
+
You said you ran a RAID 1+0? If so, I'm kind of impressed that there were no noticeable speed gains, especially in synthetic benchmarks. The reason a lot of people go RAID 0 is for that very reason alone (well, obviously). I wonder if for some reason synthetic benchmarks can't properly push a RAID 1+0, but can push a RAID 0, simply because there are fewer drives to deal with?

RAID 1+0 is usually done using four drives to gain the best possible performance, but considering that in effect this only stripes across two drives in RAID 0, it's not a huge performance increase. Using only two (or even three) drives means you won't get full RAID 0 performance, because the same drives must write the mirrored copies at the same time as the user data.

I used a RAID 10 (i.e. 1+0) myself once Vista launched, because it was more RAID-friendly than XP... I saw some half-decent read/write boosts, but it wasn't anything major.

And 2), I knew that I was never going to just pull a perfectly good drive to swap it with another. I thought that I would have gotten some warning. NOT! I guess part of that was that the drives I used were not really that cheap & they do have a 3- or 5-year warranty. It has been about 3 years, but I'm not going to mess with it.

Again, I think it's silly to do so. If a drive fails on its own, the RAID will give you enough time to replace it under normal circumstances, and many of my drives are at or beyond the 5-year mark in age. The "power on hours" count for many of my drives is in the 2-3 year range and climbing... ;)

FWIW, the system is in Florida. It was on a UPS. Florida gets plenty of thunderstorms, so a UPS is an obvious thing. The computer is not heavily used, tho it was on 24x7. One night the power went out, and the UPS did allow the system to shut off politely. But upon powering up we had a drive failure... completely failed & hosed all of the movies & games... er, data.:eek:

Despite the RAID array? I've personally experienced RAID 10 and RAID 0 arrays inexplicably fail for various reasons, but never from an actually bad drive. Overclocking errors killed the RAID 0 array, so that was my fault at least. But on another occasion, even with a 100% stock system, I was constantly seeing errors being written to a replacement RAID 10 array, which I had just built to replace the first RAID 10 array that failed. It's scary, because Intel's built-in RAID never tells you errors are constantly being found on the array, and once the errors occur in the wrong spot, the computer sees the array as failed regardless. It was incapable of rebuilding a failed four-disk RAID 10 array.

This is actually why many power users seem to recommend physical hardware RAID setups, but I don't know if that would have made any difference or not, to be honest.

With fast drives & various ways of accomplishing automatic backups, I see little point to RAID other than for commercial servers, and that is a whole different case that qualifies under the 2 conditions I cite.

With a pure RAID 1 array, if a drive failed, you should be able to pull it out and reboot the system using the remaining drive as if nothing had happened. RAID was always about data redundancy, but there isn't much reason for it beyond that these days. A single SSD would blow any RAID array out of the water as far as user experience goes.
 

Psi*

Tech Monkey
First, I said nothing about latency or spin-up. Rob's article references 1.5 TB drives, not puny:rolleyes: 32 GB drives.

Second, I would hope a low-capacity $400 (SSD) drive outperforms a $100 TB drive, and it does initially. BUT SSDs slow down with use ... just a minor little point. Additionally, there is something about the term "wear leveling" used relative to SSDs that increases my reluctance to part with $400 :eek:

Given that, just the concept of a RAID of SSDs is an oxymoron. RAID is an acronym: redundant array of inexpensive disks. Just because you can do something does not mean that it is a good idea or has relevance. The article is interesting, 'tho I suspect that the RAM drives of the past with a reasonable UPS are faster and much less expensive, if the argument is for brute-force drive speed.

What would be interesting from that article is more analysis about drive channel capacities and saturation. That is the underlying technical story.

Hot-swappable drives were developed chiefly for proactive RAID maintenance ... swap out a drive before it fails. Not such a big deal for home use, & perhaps there's no application there, but for businesses that must be up 24x7 it is important. Banking & money movers are quite fanatical about this. The cost of labor during business hours is less than calling someone out in the wee hours. RAID does not need it, but system maintenance does.

My RAID was off an ASUS motherboard. Motherboard RAID is substandard relative to what Adaptec controllers can offer, which is what your article mentions. Adaptec is fine hardware. A couple of Adaptec controllers (or a competitor's mentioned in the article) are probably the only ways to satisfactorily implement RAID. Software implementation has always been the worst, 'tho your article suggests otherwise ... and what else would we be doing on that machine?:confused:

Either way, with separate controllers, off a motherboard, or via software, rebuilding a 1TB drive would be painfully slow, as indicated in the original article mentioned by Rob. A 200-hour rebuild for a 1.5 TB drive is ludicrous for anyone. I have a few TB drives with large partitions, & scheduled defragging for when I am not around is my friend.

For serious home use I see no point in RAID. RAID still only makes sense for random, high-speed access of huge files by simultaneous users ... Blu-ray movies approach 50 GB, and even with some kind of interactive video (that I haven't heard of yet) we lack simultaneous users ... so ehhhh. :p

Yes, drives can be swapped out in the event of failure, so there is comfort in a kind of backup. Unfortunately, in my case, that exact same drive could not be found when I needed it. (And I did have 4 drives in the array, as it turns out.) I did find one later, but that did not do me any good. So the longer drives are in an array, the greater the likelihood of catastrophic failure, because there are no replacement parts ... as in a part!!

Back to the OP's point, RAID with 1 TB drives can be done but it is impractical. And RAID does not automatically provide file version backup so a backup plan is necessary anyway.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Kougar said:
Improvement in read/write performance, yes. But RAIDing drives does NOT actually decrease the ~9ms (or 12ms, whichever) access latency, because every single drive in the array has the same ~9ms access latency. It might seem counter-intuitive, but a single X25-E can get to and read data before an eight-drive 15,000RPM SAS RAID array is able to access it.

That's true, but wouldn't latencies naturally decrease when bringing multi-tasking into things? Of course, it would assume that twice the amount of data could be grabbed at once, but I think it could be assumed that latencies would be a little bit lower on a RAID 0 setup. If I'm wrong, then so be it.

Psi* said:
Given that, just the concept of a RAID of SSDs is an oxymoron. RAID is an acronym: redundant array of inexpensive disks.

Fine, let's change the I to a small L and call it Redundant Array of Luxurious Disks. ;)
 

Psi*

Tech Monkey
I wonder if a replacement drive must be identical? What if it is identical in all aspects but has more capacity?
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
I wonder if a replacement drive must be identical? What if it is identical in all aspects but has more capacity?

In that case, if anything did happen, I'd assume the drive would be formatted to the lower capacity, and the extra space would just be rendered gone. When I found out our server drive died, our host told us we needed an identical drive, so perhaps it does have to be identical. Kougar would know the answer to that one better than anyone, though.
 

Kougar

Techgage Staff
Staff member
Well, I'm not sure how RAIDing SSDs got into this; I was comparing a single SSD to a RAID setup. As ya say, there isn't much of a performance gain for a home user, and that's why a single SSD would be the better route. As far as hot plugging goes, my comments were directed specifically at the home user. :)

Using drives with TRIM will pretty much fix any slowing down issues as an SSD is filled with data... assuming the SSD firmware supports TRIM and the host OS also supports TRIM (and has it enabled).

SSD longevity is a huge concern, and even I have plenty of questions there. However, if a regular HDD only lasts three years for either one of you, then I'm sure a good-quality SSD would easily beat that. ;)

My hard drives are closer to five years old, if not even older, and I have much less confidence about an SSD lasting five years... it depends on too many factors. The larger the drive, the more cells there are to spread the "wear leveling" out on, but if the drive is kept mostly full, then capacity really doesn't matter much. It's a function of drive capacity and how much of it is used, along with the specific drive controller and how it implements wear leveling.
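To show how those factors interact, here's a toy endurance estimate; the P/E cycle count, write-amplification values, and daily write volume are all illustrative assumptions, not specs for any real drive:

```python
# Toy sketch: SSD endurance as a function of capacity, P/E cycles, and write
# amplification. All numbers are illustrative assumptions, not vendor specs.

def years_to_wear_out(capacity_gb: float, pe_cycles: int,
                      write_amplification: float,
                      gb_written_per_day: float) -> float:
    total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_host_writes_gb / gb_written_per_day / 365

# Same hypothetical 80GB MLC drive (~3,000 P/E cycles), 10GB of host writes/day.
# A mostly empty drive keeps write amplification low; a mostly full one doesn't.
print(f"mostly empty (WA ~1.5): {years_to_wear_out(80, 3000, 1.5, 10):.0f} years")
print(f"mostly full  (WA ~5.0): {years_to_wear_out(80, 3000, 5.0, 10):.0f} years")
```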

That's true, but wouldn't latencies naturally decrease when bringing multi-tasking into things? Of course, it would assume that twice the amount of data could be grabbed at once, but I think it could be assumed that latencies would be a little but lower on a RAID 0 setup. If I'm wrong, then so be it.

Ah, but what if you were accessing (or creating) a 4KB logfile? You could have 100 hard drives in RAID 0, but if every single hard drive in the array requires 4 milliseconds to seek across the disk surface, then 4ms would be the quickest you will get that file. (4ms is better than a VelociRaptor... versus 0.065ms for Intel's Gen2 SSD.)

Now, if you start stacking multiple read and/or write commands, then after the initial 4ms you will have multiple drives accessing multiple parts of the data at once; that's the gain from RAID you're thinking of here.
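A toy model of that difference, using the seek figures above and treating each drive as handling one queued request at a time (a big simplification, but it shows where the array starts to pay off):

```python
# Toy model: servicing small random reads on a RAID 0 of HDDs vs. a single SSD.
# Big simplification: every request costs a full seek, each drive handles one
# request at a time, and requests spread evenly across the drives.
import math

HDD_SEEK_MS = 4.0    # per the post above
SSD_SEEK_MS = 0.065  # Intel Gen2 SSD figure from the post above

def time_to_finish_ms(requests: int, drives: int, seek_ms: float) -> float:
    return math.ceil(requests / drives) * seek_ms

for requests in (1, 64):
    raid = time_to_finish_ms(requests, 100, HDD_SEEK_MS)  # 100-drive RAID 0
    ssd = time_to_finish_ms(requests, 1, SSD_SEEK_MS)     # one SSD
    print(f"{requests:>2} request(s): RAID 0 x100 HDD = {raid:.2f} ms, "
          f"single SSD = {ssd:.2f} ms")
```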

I wonder if a replacement drive must be identical? What if it is identical in all aspects but has more capacity?

Not anymore, but this used to be the case. A good RAID controller will work with any drive with similar performance characteristics that matches the capacity of the other drives in the array... but for performance reasons it's best to use as similar a drive as possible.

If another drive wasn't available, then imaging the drive onto a non-RAID backup would work in the interim until new drives could be found. I've used Seagate 320GB drives in all of my RAID configurations, and they still sell those even three years later (way overpriced today, though!).

Some of my Seagate 7200.10 320GBs don't even have the same PCB and controller on them, despite being bought from Newegg in a single order. The controller chip is from a different maker and the firmware was newer on some of them, so don't always expect to find truly identical drives even if you buy them all at once. :p (Two were made in China, two were made in Singapore.)
 