Luke Scharf wrote:
> Maurice Volaski wrote:
>   
>>> Perhaps providing the computations rather than the conclusions would
>>> be more persuasive  on a technical list ;>
>>>     
>>>       
>> 2 16-disk SATA arrays in RAID 5
>> 2 16-disk SATA arrays in RAID 6
>> 1 9-disk SATA array in RAID 5.
>>
>> 4 drive failures over 5 years. Of course, YMMV, especially if you 
>> drive drunk :-)
>>   
>>     
>
> My mileage does vary!
>
> On a 4-year-old, 84-disk array (with 12 RAID 5s), I replace one drive 
> every couple of weeks (on average).  This array lives in a proper 
> machine room with good power and cooling.  The array stays active, though.
>
> -Luke
>   

I basically agree with this.  We have about 150 TB, mostly in RAID 5 
configurations ranging from 8 to 16 disks per volume.  We also replace 
a bad drive about every week or three, but in six years we have never 
lost an array.  I think our "secret" is this: on our 3ware controllers 
we run a verify at least three times a week.  The verify reads the 
whole array (data and parity), finds bad blocks, and relocates them to 
good media where necessary.  Because of this, we've never had a rebuild 
trigger a secondary failure.  Knock wood.  Our server room has 
conditioned power and cooling as well.
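
For the ZFS side of this list, the rough equivalent of that 3ware verify 
is a regularly scheduled scrub: it walks every allocated block, checks 
the checksums, and repairs from redundancy before a rebuild ever has to 
depend on a latent bad sector.  A minimal cron sketch (the pool names 
"tank" and "backup" are placeholders, and the timing is just an example):

  # root's crontab: scrub each pool during a quiet window,
  # staggered so the scrubs don't compete for I/O.
  0 2 * * 0   /usr/sbin/zpool scrub tank
  0 2 * * 3   /usr/sbin/zpool scrub backup

"zpool status" will then report when the last scrub finished and whether 
it found or repaired any checksum errors.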

Jon
