Hi,

> > In your case, there are two other aspects:
> >
> > - if you pool small devices as JBODs below a vdev member, no
> > superordinate parity will help you when you lose a member of the
> > underlying JBOD. The whole pool will just be broken, and you will
> > lose a good part of your data.
>
> No, that's not correct. The first option of pooling smaller disks into
> larger, logical devices via SVM would allow me to theoretically lose up
> to *eight* disks while still having a live zpool (in the case where I
> lose 2 logical devices comprised of four 500GB drives each; this would
> only kill two actual RAIDZ2 members).
You are right; I was wrong with the JBOD observation. Still, in the worst
case the array cannot tolerate more than 2 disk failures, namely when the
failures are spread across different 2 TB building blocks.

> Using slices, I'd be able to lose up to *five* disks (in the case where
> I'd lose one 2TB disk (affecting all four vdevs) and four 500GB disks,
> one from each vdev).

As a single 2 TB disk causes a failure in every group in scenario 2, the
worst case there is likewise "3 disks and you are out". That setup also
limits your options for playing with the grouping: you cannot go below 4
groups. Consequently, the capacity paid for redundancy is 4 TB in both
scenarios (with no hot spare).

Doesn't all that point to option 1 as the better choice? Performance
should be much better, too: slicing the 2 TB drives will leave those
members with basically un-cached I/O, and they will dominate the rest of
the array.

One more thing about SVM is still unclear to me: if one of the smaller
disks goes, the whole JBOD device has to be resilvered from the ZFS
perspective. But what will the interaction be between repairing the JBOD
in SVM and resilvering in ZFS?

Regards,

Tonmaus
-- 
This message posted from opensolaris.org
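
To make the failure arithmetic above concrete, here is a small Python
sketch. The thread never states the exact drive counts, so the inventory
below (four 2 TB drives, sixteen 500 GB drives) is an assumption that
reproduces the scenarios discussed; the device names and the brute-force
enumeration are purely illustrative, not anyone's actual setup.

from itertools import combinations

# Assumed inventory (the thread never states exact drive counts): four
# 2 TB drives and sixteen 500 GB drives. Adjust for other hardware.
BIG = [f"2T-{i}" for i in range(4)]        # whole 2 TB drives
SMALL = [f"500G-{i}" for i in range(16)]   # 500 GB drives
DISKS = BIG + SMALL

def pool_dead_option1(failed):
    # Option 1: one RAIDZ2 group of eight 2 TB members -- the four real
    # 2 TB drives plus four SVM concatenations of 4 x 500 GB each. A
    # concatenation is lost as soon as any of its four small drives fails.
    members_down = sum(d in failed for d in BIG)
    for g in range(4):
        if any(d in failed for d in SMALL[4 * g:4 * g + 4]):
            members_down += 1
    return members_down > 2                # RAIDZ2 survives two members

def pool_dead_option2(failed):
    # Option 2: four RAIDZ2 groups of eight 500 GB members each -- one
    # slice from every 2 TB drive plus four whole 500 GB drives per group.
    # A failed 2 TB drive therefore takes one member out of every group.
    for g in range(4):
        down = sum(d in failed for d in BIG)
        down += sum(d in failed for d in SMALL[4 * g:4 * g + 4])
        if down > 2:
            return True
    return False

def guaranteed_tolerance(pool_dead):
    # Largest k such that *every* combination of k failed disks survives.
    k = 0
    while k < len(DISKS) and all(not pool_dead(set(c))
                                 for c in combinations(DISKS, k + 1)):
        k += 1
    return k

for name, dead in [("option 1 (SVM concats)", pool_dead_option1),
                   ("option 2 (slices)",      pool_dead_option2)]:
    print(name, "survives any", guaranteed_tolerance(dead), "disk failures")

Both layouts print 2: a third failure can always be placed so that one
RAIDZ2 group loses three members, i.e. "3 disks and you are out" either
way. The lucky cases differ (two whole concatenations, i.e. eight small
drives, in option 1), but the guaranteed tolerance and the 4 TB of raw
capacity spent on parity are the same.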