Richard L. Hamilton writes:
 > Well, no; his quote did say "software or hardware".  The theory is apparently
 > that ZFS can do better at detecting (and with redundancy, correcting) errors
 > if it's dealing with raw hardware, or as nearly so as possible.  Most SANs
 > _can_ hand out raw LUNs as well as RAID LUNs; the folks that run them are
 > just not used to doing it.
 > 
 > Another issue that may come up with SANs and/or hardware RAID:
 > supposedly, storage systems with large non-volatile caches will tend to have
 > poor performance with ZFS, because ZFS issues cache flush commands as
 > part of committing every transaction group; this is worse if the filesystem
 > is also being used for NFS service.  Most such hardware can be
 > configured to ignore cache flushing commands, which is safe as long as
 > the cache is non-volatile.
 > 
 > The above is simply my understanding of what I've read; I could be way off
 > base, of course.
 >  

Sounds good to me. The first point is easy to understand. If you
rely on ZFS for data reconstruction, carving virtual LUNs out of
your storage and mirroring those LUNs in ZFS, then it's possible
that both copies of a mirrored block end up on the same physical
device.
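
For illustration only (the pool and device names here are made up),
this is the kind of layout where that can happen: ZFS sees two
independent devices, but the array may well back both virtual LUNs
with the same spindles:

    # hypothetical SAN-provided LUNs; ZFS assumes they can fail
    # independently, which only the array can actually guarantee
    zpool create tank mirror c2t0d0 c3t0d0
    zpool status tank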

Performance-wise, the ZFS I/O scheduler might interact in
interesting ways with the one in the storage array, but I don't
know whether this has been studied in depth.
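
On the cache-flush point: if the array cannot be configured to
ignore flush commands, my understanding is that Solaris ZFS can be
told not to issue them at all via a tunable in /etc/system; like
the array-side setting, this is only safe when the write cache is
truly non-volatile:

    * /etc/system -- disable ZFS cache flushes (takes effect at
    * next boot; safe ONLY with a non-volatile write cache)
    set zfs:zfs_nocacheflush = 1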

-r



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
