Sent on: Friday, November 9, 2012 1:24 PM
Hey Bob, I think you're thinking of RAID3. RAID5 doesn't have any specific parity volume. A RAID5 rebuild reads all the stripes on the working disks and results in continuous writes only on the drive being rebuilt. If a RAID10 set has a one-disk failure, swapping in a replacement will still result in the replacement drive undergoing continuous writes. So... (?)

______________
Calvin Chu
Senior Technology Licensing Officer, Columbia Technology Ventures
[address removed]
(212)[masked] (voice)
(212)[masked] (fax)
@cchu (twitter)
techventures.columbia.edu
Subscribe to our mailing list
Interact with our office: Twitter | Facebook | LinkedIn

-----Original Message-----
From: Bob Gezelter [mailto:[address removed]]
Sent: Friday, November 09,[masked]:41 PM
To: [address removed]
Cc: Laird Popkin; Calvin Chu; [address removed]
Subject: RE: [newtech-1] SSD based NAS solution

Laird, Calvin, and Paul,

It is well known that RAID5 operation, particularly rebuilds, can result in drive failures. If one considers MTBF as a function of the number of operations (rather than the number of "running hours"), this should not be surprising.

RAID5 uses an extra volume for parity. Thus, each and every time one of the other volumes is written to, the parity volume must be updated. This results in a very high usage rate for the parity disk, and the resulting wear and tear leads to failure.

When it was first proposed, RAID5 made more economic sense. Today, in many situations, it is far better to use mirroring configurations (so-called "RAID 0+1") to achieve performance and redundancy.

- Bob Gezelter, http://www.rlgsc....

> -------- Original Message --------
> Subject: Re: [newtech-1] SSD based NAS solution
> From: Laird Popkin <[address removed]>
> Date: Fri, November 09,[masked]:45 am
> To: [address removed]
>
> I'll second this - I've done a lot of RAID set rebuilds, and had a few
> double failures.
>
> So for critical systems (databases, etc.), running with enough
> redundancy to survive two drive failures is a great idea. My guess
> (and I was in the hard drive business for a few years) is that the
> rebuild process puts a high load of stress on the rest of the drives,
> making it more likely that another drive fails during the rebuild. On
> top of that, if all of the drives are the same model and age, they are
> more likely to fail for the same reasons at around the same time, if
> there's a mechanical cause of failure.
>
> Even if that's not the case, rebuild times for large drive arrays
> built out of big, slow disks can be *days*, during which you're running
> with no redundancy, which is dangerous.
>
> Drobo supports that as an option, which they call "Dual Disk Redundancy".
> If you have one of their larger units (e.g. the 8-bay Drobo Pro),
> you're only spending two drives to protect six, which seems like a
> pretty reasonable overhead to secure your data. In a smaller, four-drive
> unit, you'd be using two drives to protect two, which would work,
> technically, but seems inefficient since you're wasting a lot of
> disk space.
>
> On Thu, Nov 8, 2012 at 5:18 PM, paul <[address removed]> wrote:
>
> > Calvin,
> >
> > How many RAID5 systems do you operate? In my experience with dozens
> > of RAID5, 6, and 10 subsystems, when a drive fails on a subsystem
> > there's a great probability of a second drive failure during the
> > rebuild process. I have experienced such failures multiple times in
> > the last few years. Have you considered RAID6 or 10 to lower the
> > risk of data loss in a worst-case situation?
> >
> > Paul Yurt
> > Inventor / Systems Engineer / Media Technologist
> > [address removed]
> > 310~[masked]
>
> --
> *Laird Popkin | Executive Director, Architecture*
> *KAPLAN TECHNOLOGY*
> Kaplan, Inc. | 6301 Kaplan University Avenue, Fort Lauderdale, FL 33309
> tel [masked] | email [address removed]
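[Editor's note: the mechanics Calvin describes - RAID5 rotates parity across all member disks rather than dedicating one disk to it, and a rebuild reads every surviving disk while writing only the replacement - can be sketched in a few lines. This is a toy single-stripe illustration of XOR parity, not any controller's actual implementation; the function names are invented for the example.]

```python
# Toy sketch of RAID5 XOR parity (illustrative only, assumed names).
# In real RAID5 the parity block rotates across disks stripe by stripe,
# so no single disk is a dedicated "parity volume" (that is RAID3/RAID4).
from functools import reduce
from operator import xor

def parity(blocks):
    """XOR parity over equal-length byte blocks."""
    return bytes(reduce(xor, col) for col in zip(*blocks))

def rebuild_block(surviving_blocks):
    """Reconstruct a stripe's lost block from the survivors.

    Works whether the lost block held data or parity, because the XOR
    of all blocks in a stripe (data plus parity) is zero. A full rebuild
    repeats this per stripe: reads hit every surviving disk, writes hit
    only the replacement drive.
    """
    return parity(surviving_blocks)

# One stripe of a 4-disk RAID5: three data blocks plus parity.
d = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f"]
p = parity(d)
assert rebuild_block([d[0], d[1], p]) == d[2]  # a data disk failed
assert rebuild_block(d) == p                   # the parity block was lost
```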
This email message originally included an attachment.