Albert Chin said:
> Well, ZFS with HW RAID makes sense in some cases. However, it seems that if
> you are unwilling to lose 50% disk space to RAID 10 or two mirrored HW RAID
> arrays, you either use RAID 0 on the array with ZFS RAIDZ/RAIDZ2 on top of
> that or a JBOD with ZFS RAIDZ/RAIDZ2 on top of that. 

I've been re-evaluating our local decision on this question (how to lay
out ZFS on pre-existing RAID hardware).  In our case, the array does not
allow RAID-0 of any type, and we're unwilling to give up the expensive
disk space to a mirrored configuration.  In fact, when we last decided,
we concluded that we didn't want to layer RAID-Z on top of HW RAID-5,
thinking that the added loss of space was too high, given any of the
"XXX" layouts in Jonathan Edwards' chart:
> #   ZFS     ARRAY HW        CAPACITY    COMMENTS
> --  ---     --------        --------    --------
> . . .
> 5   R1      1 x R5          (N-1)/2     parity and mirror on same drives (XXX)
> 9   RZ      3 x R5          N-4         triple parity calculations (XXX)
> . . .
> 10  RZ      1 x R5          N-2         double parity calculations (XXX)


So, we ended up (some months ago) deciding to go with only HW RAID-5,
using ZFS to stripe together large-ish LUNs made up of independent HW
RAID-5 groups.  We'd have no ZFS redundancy, but at least ZFS would catch
any corruption that came along.  We can restore individual corrupted
files from tape backups (which we're already doing anyway), if necessary.
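
For concreteness, such a pool is just a plain stripe of the array LUNs;
a minimal sketch, with hypothetical pool and device names:

        # One LUN per HW RAID-5 group; ZFS stripes across them and
        # checksums everything, but has no redundancy of its own.
        zpool create tank c4t0d0 c4t1d0 c4t2d0

As I understand it, "zpool status -v tank" will then name any files with
unrepairable checksum errors, which becomes the restore-from-tape list.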

However, given that the default behavior of ZFS (as of Solaris 10U3) is
to panic/halt when it encounters a corrupted block that it can't repair,
I'm re-thinking our options, weighing them against the possibility of
significant downtime caused by a single-block corruption.
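
If one wanted to see this behavior without risking a production pool, a
throwaway file-backed pool makes a cheap test rig.  A sketch, with
hypothetical paths, that deliberately corrupts an unreplicated pool:

        # Build a small pool on a file and put some data in it.
        mkfile 128m /var/tmp/d0
        zpool create testpool /var/tmp/d0
        cp /usr/bin/* /testpool 2>/dev/null
        # Scribble on the backing store (well past the front labels),
        # then re-import and scrub to surface the damage.
        zpool export testpool
        dd if=/dev/urandom of=/var/tmp/d0 bs=1k count=64 seek=8192 conv=notrunc
        zpool import -d /var/tmp testpool
        zpool scrub testpool
        zpool status -v testpool

Whether the machine merely reports permanent errors or actually panics
presumably depends on whether the damaged blocks are file data or pool
metadata.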

Today I've been pondering a variant of #10 above, the variation being
to slice the RAID-5 volume into more than N LUNs, i.e. LUNs smaller than
the individual disks that make up the HW R5 volume.  Since a RAID-Z gives
up one LUN's worth of space to parity regardless of its width, a larger
number of smaller LUNs means a smaller fraction of the pool lost to ZFS
parity, which is nice when overall disk space is important to us.
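
To put numbers on that, carving up the ~2198GB of usable space in our
6+1 R5 volume (see below) at two different widths:

        5 LUNs of 439.6GB:  4/5 usable  ->  ~1758GB  (20% to ZFS parity)
        9 LUNs of 244.2GB:  8/9 usable  ->  ~1954GB  (11% to ZFS parity)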

We're not expecting RAID-Z across these LUNs to make it possible to
survive failure of a whole disk; rather, we only "need" RAID-Z to repair
the occasional block corruption, in the hope that this heads off the
need to restore a whole multi-TB pool.  We'll rely on the HW RAID-5 to
protect against whole-disk failure.
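
That plan only works if the corruption actually gets found and rewritten
before we need the data, so presumably a periodic scrub belongs in cron;
pool name hypothetical:

        # Walk every block and let RAID-Z rewrite any that fail checksum.
        zpool scrub tank
        # Quick health check afterwards (or any time).
        zpool status -x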

Just thinking out loud here.  Now I'm off to see what kind of performance
cost there is, comparing (with 400GB disks):
        Simple ZFS stripe on one 2198GB LUN from a 6+1 HW RAID5 volume
        8+1 RAID-Z on nine 244.2GB LUNs from a 6+1 HW RAID5 volume
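
Roughly, the two setups under test, with hypothetical device names:

        # Config 1: plain stripe (one vdev) on the single big LUN.
        zpool create striped c5t0d0
        # Config 2: single-parity RAID-Z across the nine small LUNs.
        zpool create zraid raidz c5t1d0 c5t2d0 c5t3d0 c5t4d0 \
            c5t5d0 c5t6d0 c5t7d0 c5t8d0 c5t9d0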

Regards,

Marion

