From our front-page news:
We've gone over the importance of keeping your data safe many times over the life of the site, but one method of doing so, RAID, is something we haven't talked about much. RAID, in layman's terms, takes more than one drive and makes your data redundant, so that if a drive fails, you can easily get your machine back up and running. RAID can also be used purely for performance, or for both redundancy and performance at once.
For those who use RAID for redundancy, though, a writer at the Enterprise Storage Forum questions the future of the tech and states that it could be on its way out. Back when hard drives were measured mostly in gigabytes rather than terabytes, rebuilding a RAID array took very little time. But because densities have kept growing while drive speeds haven't risen at the same rate, rebuilding an array built on 1TB+ drives takes far longer than anyone would like... upwards of 20 hours.
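To put rough numbers to that claim, here's a back-of-the-napkin estimate. A rebuild has to read or write every sector of the replacement drive, so the best case is simply capacity divided by sustained throughput, and a real-world rebuild competing with live I/O runs far slower. The drive sizes and speeds below are illustrative assumptions on my part, not figures from the article:

```python
# Rough rebuild-time estimate: capacity / sustained throughput.
# Capacities and speeds below are assumed for illustration only.

def rebuild_hours(capacity_gb, throughput_mb_s):
    """Best-case time to read/write an entire drive, in hours."""
    return (capacity_gb * 1024) / throughput_mb_s / 3600

for capacity_gb, throughput_mb_s in [(80, 60), (1000, 100), (2000, 120)]:
    print(f"{capacity_gb} GB @ {throughput_mb_s} MB/s: "
          f"{rebuild_hours(capacity_gb, throughput_mb_s):.1f} h best case")

# An 80 GB drive finishes in under half an hour, while 1-2 TB drives
# already need roughly 3-5 hours even when idle; under real load,
# 20+ hour rebuilds aren't surprising.
```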
What's the solution? To get rid of RAID and look for easier and quicker backup schemes, or something else entirely? The writer of the article offers one idea... a RAID solution that doesn't rebuild the entire array, but rather only the data actually in use (see the sketch below). Still, that would take a while if your drives are loaded to the brim with data. Another option is an improved file system. Many recommend ZFS, but it's only used in specific environments, and certainly not on Windows.
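Extending the same back-of-the-napkin math (same assumed numbers as above), a rebuild that copies only allocated data scales with how full the array is, which is exactly why the idea helps least when the drives are nearly full:

```python
# Sketch: rebuild time when only used data is copied, as a fraction of
# a full-drive rebuild. Numbers are illustrative assumptions.

def used_data_rebuild_hours(capacity_gb, throughput_mb_s, used_fraction):
    full = (capacity_gb * 1024) / throughput_mb_s / 3600
    return full * used_fraction

for used in (0.25, 0.50, 0.90):
    hours = used_data_rebuild_hours(1000, 100, used)
    print(f"1 TB drive, {used:.0%} full: ~{hours:.1f} h best case")

# 25% full: ~0.7 h, 90% full: ~2.6 h -- the benefit shrinks as the
# array fills up, which is the catch noted above.
```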
This also raises the issue of Microsoft's NTFS, and we have to wonder if now would be a great time for the company to finally update its file system (chances are it's already working on one). NTFS isn't a bad FS by any means, but between this RAID issue and SSDs preparing to hit the mainstream, it seems we're in need of an updated, forward-thinking FS. My question is: if you use RAID, what are your thoughts on it? Do you still plan to use it in the future, despite the long rebuild times?
What this means for you is that even for enterprise FC/SAS drives, the density is increasing faster than the hard error rate. This is especially true for enterprise SATA, where the density increased by a factor of about 375 over the last 15 years while the hard error rate improved only 10 times. This affects the RAID group, making it less reliable given the higher probability of hitting the hard error rate during a rebuild.
Source: Enterprise Storage Forum
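The quoted point is about unrecoverable read errors (UREs): as drive capacity grows faster than error rates improve, the odds of hitting at least one URE while reading the whole array during a rebuild climb. Here's a quick sketch of that probability, using the commonly published 10^-14 and 10^-15 bits-per-error datasheet specs as assumptions rather than figures from the article:

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while reading `tb_read` terabytes during a rebuild. The error-rate
# specs are typical datasheet values, assumed here for illustration.

def p_ure(tb_read, bits_per_error):
    bits = tb_read * 1e12 * 8          # TB -> bits (decimal TB)
    p_clean = (1 - 1 / bits_per_error) ** bits
    return 1 - p_clean

# Rebuilding a RAID 5 set of four 2 TB drives means re-reading the
# three surviving drives, roughly 6 TB of data.
for spec, rate in [("consumer SATA, 1 error per 1e14 bits", 1e14),
                   ("enterprise, 1 error per 1e15 bits", 1e15)]:
    print(f"{spec}: ~{p_ure(6, rate):.0%} chance of a URE during rebuild")

# Roughly 38% for the 1e14-class drive versus ~5% for the 1e15-class
# drive -- the quote's point that density is outrunning error-rate
# improvements, made concrete.
```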