> > At the moment, I'm hearing that using h/w raid under my ZFS may be
> > better for some workloads, and the h/w hot spare would be nice to
> > have across multiple raid groups, but the checksum capabilities in
> > ZFS are basically nullified with single or multiple h/w LUNs,
> > resulting in "reduced protection."  Therefore, it sounds like I
> > should be strongly leaning towards not using the hardware raid in
> > external disk arrays and using them like a JBOD.

> The big reasons for continuing to use hw raid are speed, in some
> cases, and heterogeneous environments where you can't farm out both
> non-raid-protected LUNs and raid-protected LUNs from the same storage
> array.  In some cases the array will require a raid protection
> setting, like the 99x0, before you can even start farming out
> storage.

Just a data point -- I've had miserable luck with failing drives under
ZFS JBOD configurations.  They consistently wedge my machines
(Ultra-45, E450, V880, using SATA and SCSI drives) when one of the
drives fails.  The system recovers okay, without data loss, after a
reboot, but a total drive failure (when a drive stops talking to the
system) is not handled well.

Therefore I would recommend hardware raid for high-availability
applications.

Note, it's not clear that this is a ZFS problem.  I suspect it's a
Solaris, hardware controller, or driver problem, so this may not be an
issue if you find a controller that doesn't freak out on a drive
failure.
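
To make the quoted point about checksums concrete: the difference is
whether ZFS has a redundant copy to repair from when a checksum fails.
A minimal sketch, assuming the array can export its disks individually
(the device names below are made up):

  # JBOD case: ZFS owns the redundancy, so a block that fails its
  # checksum can be rebuilt from the other disks in the raidz group.
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

  # Single hardware-raid LUN: ZFS still detects bad checksums, but has
  # nothing to heal from unless you mirror two LUNs or set copies=2 on
  # the filesystem.
  zpool create tank c2t0d0

Either layout works; the JBOD one is just where the checksums buy you
self-healing rather than detection only.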

BP 
