It’s 2008, and one would think that disk-based storage systems are beyond the point of catastrophic outages and data loss as a result of disk drive failures. The prevalent use of RAID in storage systems, in its many forms, would seem like ample insurance against the loss of data. However, a careful examination of the facts exposes the flaws in assuming that RAID alone is sufficient as a means of data protection, especially when used in conjunction with today’s high-capacity SATA disk drives.
A study published in 2007 by Bianca Schroeder and Garth A. Gibson of Carnegie Mellon’s Computer Science Department shows that the actual failure rate for SATA drives in the field is significantly higher than for either SCSI or FC drives, with as many as 4% of SATA drives failing in some production storage systems. However, what this study does not examine is the risk associated with recovering data when these disk drive failures do occur.
Much ado is made of the possibility of a second drive failing, as well as the time it takes to rebuild a SATA disk drive with a capacity of 500 GB or greater. However, another risk that receives much less attention is the non-recoverable bit error rate of SATA drives, which neither RAID nor additional spare disk drives can correct.
SATA disk drive manufacturers publish the non-recoverable bit error rate (BER) on their disk drive specification sheets. Though these BERs may vary by vendor, it does not take very long to discover that some vendors’ SATA disk drives experience a non-recoverable bit error as frequently as once for every 10–12 TB of data read (though it is stated on data sheets in terms like “1 per 10^14”). When such an error occurs, the entire block of data containing that bit is lost if that block ever has to be reconstructed, whether or not RAID is used to protect it. This error rate is deemed so high that a team of researchers at Microsoft used the term “frightening” in a December 2005 technical report to describe this possibility of data loss.
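To see where the 10–12 TB figure comes from, here is a quick back-of-the-envelope check, a sketch that assumes only the “1 per 10^14” datasheet rate quoted above:

```python
# How much data can be read, on average, before hitting one
# non-recoverable bit error at a datasheet BER of 1 per 10^14 bits?

BER = 1e-14                        # non-recoverable errors per bit read

bits_per_error = 1 / BER           # 10^14 bits between errors, on average
bytes_per_error = bits_per_error / 8

tb_decimal = bytes_per_error / 1e12    # terabytes (10^12 bytes)
tib_binary = bytes_per_error / 2**40   # tebibytes (2^40 bytes)

print(f"~{tb_decimal:.1f} TB (decimal) or ~{tib_binary:.1f} TiB per expected error")
# → ~12.5 TB (decimal) or ~11.4 TiB per expected error
```

Depending on whether decimal or binary terabytes are used, that works out to roughly 11–12.5 TB between errors, which is consistent with the 10–12 TB range cited above.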
But why doesn’t RAID protect against these non-recoverable bit errors? RAID is designed to protect against the failure of individual disk drives, not the corruption of individual bits of data. When a bit of data becomes corrupted, RAID offers no means to recover from it because it is not checking for faults at that level; RAID “assumes” the data on the remaining drives is good. This flaw is exposed when a disk drive in a RAID group fails and the data on the replacement drive must be reconstructed from the data on the surviving drives.
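To make the reconstruction step concrete, here is a minimal sketch of how single-parity RAID (RAID 5, for example) rebuilds a lost drive’s block by XOR-ing the surviving blocks together; the block contents are made up for illustration:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together (the parity math behind RAID 5)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks striped across three drives, plus one parity block.
# (Hypothetical contents, shortened to 3 bytes for readability.)
data = [b"\x10\x20\x30", b"\x0a\x0b\x0c", b"\xff\x00\x55"]
parity = xor_blocks(data)

# Drive 1 fails; its block is rebuilt by XOR-ing the survivors with parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]

# Note the catch: the rebuild works only if EVERY surviving block reads
# cleanly. A non-recoverable read error on any one of them leaves the
# XOR with a missing input, and the lost block cannot be reconstructed.
```

The assertion at the end is the whole point: parity can stand in for exactly one missing block, so a single unreadable bit on a surviving drive during a rebuild is unrecoverable.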
If the controller encounters data it cannot read while rebuilding the replacement drive from the surviving drives, the reconstruction of the affected block can never complete, and data loss occurs. The odds of this occurring increase as the capacity of SATA disk drives increases. As 1 TB SATA drives become more readily available and companies configure RAID sets as 7+1 (7 disk drives in an array group plus 1 for parity), the possibility of data loss during the rebuild of a single failed drive reaches about 1 in 10 (based on manufacturers’ specifications).
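The standard way to estimate this risk is to ask: what is the chance of hitting at least one non-recoverable read error while reading every surviving drive during the rebuild? The sketch below assumes independent per-bit errors and shows how sensitive the answer is to which datasheet BER applies; the exact figure for any given array depends on the manufacturer’s spec:

```python
import math

def rebuild_ure_probability(surviving_tb, ber):
    """Probability of at least one non-recoverable read error while
    reading `surviving_tb` terabytes (10^12 bytes each) during a RAID
    rebuild, assuming independent bit errors at rate `ber` per bit read."""
    bits = surviving_tb * 1e12 * 8
    # P(at least one error) = 1 - (1 - ber)^bits, computed via log1p
    # to stay numerically accurate for tiny per-bit rates.
    return 1 - math.exp(bits * math.log1p(-ber))

# A 7+1 RAID group of 1 TB drives: rebuilding one failed drive
# requires reading all 7 TB on the surviving data drives.
for ber in (1e-14, 1e-15):
    p = rebuild_ure_probability(7, ber)
    print(f"BER 1 in {1/ber:.0e}: rebuild failure probability ~{p:.1%}")
```

With a BER of 1 in 10^14 the probability comes out above 40%, while 1 in 10^15 gives roughly 5%; figures in the neighborhood of 1 in 10 fall between those two datasheet specs.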
So let’s put this possibility of data loss in perspective and consider when it matters. The loss of 1 bit out of every 100 trillion bits sounds pretty insignificant (and frankly, it is). This forces companies to prioritize when losing a bit of data matters, and how important the loss of an individual, seemingly insignificant bit of data is to their business operations.
Two examples that immediately come to mind: deduplication and archived data.
Deduplication puts a premium on each bit of data, since a bit of data is stored only once but may be used in the reconstruction of tens, hundreds or even thousands of files during restores. As a result, losing a single bit of data can have catastrophic consequences from a recovery standpoint. If just one bit happens to be corrupt, it may impact not just the recovery of an individual file but of multiple files, depending on which bit is affected.
The loss of a single bit of data in archived data not only can preclude companies from recovering multiple files that share the same bit of data, it also brings into question whether or not the archived data will be available when a company needs it. Archived data may be stored for years or even decades and copied many times over that period. During that time, a company may store hundreds of terabytes or even petabytes of information, so the possibility of data loss goes from a remote possibility to almost a certainty, as Microsoft’s researchers encountered.
All of this goes to prove the point that companies need a new architecture, one that goes beyond the RAID technology found in today’s disk-based storage systems, when storing this amount of information for the timeframes that are typical for archiving. RAID protection, and even making multiple copies of the same data, is no longer a guarantee that the individual bits of data that comprise a file are adequately protected or preserved.
In these circumstances, new disk-based storage systems such as Permabit’s Enterprise Archive and its RAIN-EC architecture take steps to mitigate the possibility of the loss of individual bits of data so they are preserved and protected long term. In an upcoming blog entry, I’ll take a closer look at how RAIN-EC addresses this issue in Permabit’s Enterprise Archive solution.