<huge forwards on how bad SANs really are for data integrity removed>


The answer is: insufficient data.


With modern journalling filesystems, I've never had to fsck anything or
run a filesystem repair. Ever.  On any of my SAN stuff. 

The sole place I've run into filesystem corruption in the traditional
sense is with faulty hardware controllers, and I'm not sure even ZFS
could recover from those situations. Less dire cases, though - where a
controller is merely returning slightly corrupted data - certainly would
be within ZFS's ability to detect and repair, whereas a SAN has no way
to determine that the data was bad.


That said, the primary issue here is that nobody really has any idea
how prevalent silent corruption is - that is, blocks whose contents
change on disk without any reported error, affecting file data rather
than filesystem metadata. Bit flips and the like.  Realistically, the
only way to detect this before ZFS was to do bit-wise comparisons
against backups, which becomes practically impossible on an active data
set.
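To make the mechanism concrete, here's a minimal sketch of how per-block
checksumming catches a silent bit flip that no disk or controller ever
reports. This is illustrative Python, not ZFS code; SHA-256 stands in
for ZFS's actual per-block checksums (fletcher2/4 or SHA-256, stored in
the block pointer rather than alongside the data):

```python
import hashlib
import random

BLOCK_SIZE = 4096

def checksum(block: bytes) -> bytes:
    # SHA-256 here is a stand-in for whatever per-block checksum
    # the filesystem uses; the point is it's stored separately
    # from the data it covers.
    return hashlib.sha256(block).digest()

# Write path: record a checksum for each block as it's written.
data = bytes(random.randrange(256) for _ in range(BLOCK_SIZE))
stored = bytearray(data)
stored_sum = checksum(data)

# Silent corruption: a single bit flips on the media.
# No I/O error is raised - the read "succeeds".
stored[1234] ^= 0x01

# Read path: recompute and compare. A mismatch flags the block as
# bad, at which point ZFS can fetch a good copy from a mirror or
# reconstruct it from parity, instead of handing back garbage.
corrupt = checksum(bytes(stored)) != stored_sum
print(corrupt)  # True
```

A plain SAN volume, by contrast, has no end-to-end checksum to compare
against, so the flipped bit sails through unnoticed.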

SAN/RAID equipment still has a very considerable place over JBODs in
most large-scale environments, particularly in areas of configuration
flexibility, security, and management.  Still, I think we're arguing
at cross-purposes: the real solution for most enterprise customers is
SAN + ZFS, not either one by itself.



-- 
Erik Trimble
Java System Support
Mailstop:  usca14-102
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
