Jonathan Edwards wrote:
Here are 10 options I can think of that summarize combinations of ZFS with hardware redundancy (capacity is usable drives out of N total; a small sketch of the capacity formulas follows the table):

#   ZFS     ARRAY HW        CAPACITY    COMMENTS
--  ---     --------        --------    --------
1   R0      R1              N/2         hw mirror - no zfs healing (XXX)
2   R0      R5              N-1         hw R5 - no zfs healing (XXX)
3   R1      2 x R0          N/2         flexible, redundant, good perf
4   R1      2 x R5          (N/2)-1     flexible, more redundant, decent perf
5   R1      1 x R5          (N-1)/2     parity and mirror on same drives (XXX)
6   RZ      R0              N-1         standard RAIDZ - no array RAID (XXX)
7   RZ      R1 (tray)       (N/2)-1     RAIDZ+1
8   RZ      R1 (drives)     (N/2)-1     RAID1+Z (highest redundancy)
9   RZ      2 x R5          N-3         triple parity calculations (XXX)
10  RZ      1 x R5          N-2         double parity calculations (XXX)
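
A minimal sketch, in Python, of how the CAPACITY column works out, assuming N total drives and an even split across two hardware arrays wherever the layout calls for "2 x" something. The option numbers and formulas are copied straight from the table; nothing beyond that is implied:

# Usable capacity for each option above, expressed in data drives out of N.
def usable(option, n):
    """Data drives for a given table option (formulas copied from the table)."""
    formulas = {
        1:  n / 2,        # ZFS stripe over a HW mirror
        2:  n - 1,        # ZFS stripe over HW RAID-5
        3:  n / 2,        # ZFS mirror over two HW stripes
        4:  n / 2 - 1,    # ZFS mirror over two HW RAID-5 arrays
        5:  (n - 1) / 2,  # ZFS mirror over a single HW RAID-5
        6:  n - 1,        # RAID-Z over plain JBOD / HW stripe
        7:  n / 2 - 1,    # RAID-Z over HW-mirrored trays
        8:  n / 2 - 1,    # RAID-Z over HW-mirrored drives
        9:  n - 3,        # RAID-Z over two HW RAID-5 arrays
        10: n - 2,        # RAID-Z over a single HW RAID-5
    }
    return formulas[option]

# Example: a 12-drive shelf
for opt in range(1, 11):
    print("option %2d: %4.1f data drives out of 12" % (opt, usable(opt, 12)))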

If you've invested in a RAID controller on an array, you might as well take advantage of it; otherwise, you could probably get an old D1000 chassis somewhere and just run RAIDZ on JBOD.

I think it would be good if RAIDoptimizer could be expanded to show these
cases, too.  Right now, the availability and performance models are simple.
To go to this level, the models become more complex and there are many more
tunables.  However, for a few representative cases, it might make sense to
do a deep analysis, even if that analysis does not get translated into a
tool directly.  We have the tools to do the deep analysis, but the models
will need to be written and verified.  That said, does anyone want to see
this sort of analysis?  If so, which configurations should we do first (keep
in mind that each config may take a few hours, maybe more, depending on the
performance model)?
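
Not RAIDoptimizer itself, but as a rough illustration of the kind of simple availability model referred to here, the following is a first-order MTTDL (mean time to data loss) sketch for a single-parity group versus a pool of two-way mirrors. The per-drive MTBF and rebuild-time figures are made-up placeholders, not measurements:

# First-order MTTDL estimates, to show the shape of a "simple" availability model.
MTBF_HOURS = 1_000_000   # assumed per-drive mean time between failures (placeholder)
MTTR_HOURS = 24          # assumed time to replace and resilver a drive (placeholder)

def mttdl_single_parity(group_size, mtbf=MTBF_HOURS, mttr=MTTR_HOURS):
    """Textbook MTTDL for one single-parity (RAID-5 / RAID-Z) group:
    data is lost if a second drive fails during the first drive's rebuild."""
    return mtbf ** 2 / (group_size * (group_size - 1) * mttr)

def mttdl_two_way_mirror(pairs, mtbf=MTBF_HOURS, mttr=MTTR_HOURS):
    """Textbook MTTDL for a pool of two-way mirror pairs."""
    return mtbf ** 2 / (pairs * 2 * mttr)

hours_per_year = 24 * 365
print("7-drive RAID-Z group:    %10.0f years" %
      (mttdl_single_parity(7) / hours_per_year))
print("3 mirror pairs (6 drvs): %10.0f years" %
      (mttdl_two_way_mirror(3) / hours_per_year))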
 -- richard