2012-01-08 5:37, Richard Elling wrote:
The big question is whether they are worth the effort. Spares solve a serviceability problem and only impact availability in an indirect manner. For single-parity solutions, spares can make a big difference in MTTDL, but have almost no impact on MTTDL for double-parity solutions (e.g. raidz2).
Well, regarding this part: in the presentation linked in my OP, the IBM presenter compares what happens to data redundancy when a drive dies in a 6-disk raid10 (3 mirrors) with one spare drive, a 7-disk set overall:

1) Traditional RAID - one full disk is a mirror of another full disk; 100% of a disk's size is "critical" and has to be replicated onto the spare drive ASAP.

2) Declustered RAID - all 7 disks carry 2 unique data blocks from the "original" setup plus one spare block (I am not sure I described it well in words; his diagram shows it better). If a single disk dies, only about 1/7 of a disk's worth of data is critical (not redundant), and it can be fixed faster because the rebuild work is spread across all the surviving disks.

For their typical 47-disk sets with RAID-7-like redundancy, under 1% of the data becomes critical when 3 disks die at once, which is itself deemed unlikely. Apparently, in the GPFS layout, MTTDL ends up much higher than in raid10+spare, with all other stats being similar.

I am not sure I'm ready (or qualified) to sit down and present the math right now - I just heard some ideas that I considered worth sharing and discussing ;)

Thanks for the input,
//Jim
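P.S. For anyone who wants to poke at the numbers, here is a rough back-of-the-envelope sketch of why declustering shrinks the critical window. This is my own arithmetic, not taken from the presentation: the 8 data + 3 parity stripe width and the uniformly random stripe placement are assumptions.

    # Hedged sketch: fraction of stripes left with zero redundancy after
    # simultaneous failures in an assumed randomly declustered layout.
    from math import comb

    def frac_stripes_losing_all_redundancy(n_disks, stripe_width, n_failures):
        """Fraction of stripes that have a strip on *every* failed disk,
        assuming each stripe is placed on stripe_width distinct disks
        chosen uniformly at random from n_disks.  Only those stripes
        lose all of their redundancy."""
        return comb(stripe_width, n_failures) / comb(n_disks, n_failures)

    # Assumed 47-disk set, 8 data + 3 parity stripes, 3 disks dead at once:
    print(frac_stripes_losing_all_redundancy(47, 11, 3))   # ~0.0102, i.e. about 1%

    # Rebuild I/O per surviving disk after one failure, 2-way mirroring:
    #   dedicated spare:  the single mirror partner reads a full disk,
    #                     the spare writes a full disk.
    #   declustered over 7 disks: reads and writes are each spread over
    #                     the 6 survivors, so roughly 1/6 of a disk each.
    failed_capacity = 1.0                    # in units of one disk
    print(failed_capacity / (7 - 1))         # ~0.17 disk of rebuild work per survivor

The comb() ratio is just the chance that a given stripe happened to have strips on all of the failed disks, which is the only case where that stripe is down to zero redundancy; everything else still has at least one copy or parity left and merely needs re-protection.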