Torrey McMahon wrote:
> [EMAIL PROTECTED] wrote:
>> I'll bet that ZFS will generate more calls about broken hardware
>> and fingers will be pointed at ZFS at first because it's the new
>> kid; it will be some time before people realize that the data was
>> rotting all along.
>
> Ehhh.... I don't think so. Most of our customers have HW arrays that
> have been scrubbing data for years and years, as well as apps on top
> that have been verifying the data (Oracle, for example). Not to
> mention there will be a bit of time before people move over to ZFS in
> the high end.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Ahh... but there is the rub. Today, you/we don't *really* know, do
we? Maybe there are bad-juju blocks, maybe not. Running ZFS, whether on
a redundant vdev or not, will certainly turn the big spotlight on and
give us the data: either the checksums matched, or they didn't. And if
we are on redundant vdevs, hey - we'll fix it. If not, well, we are
certainly no worse off than with today's filesystems, but at least
we'll know the bad juju is there. How does the number of checksum
mismatches compare across different types/vendors/costs of storage
subsystems? SLAs based on the number of checksum failures? Price cuts
on storage that routinely hands back data with bad checksums? Now,
that is what will be interesting to me to see....
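
For anyone curious what those numbers look like on their own pool, the
checksum-error counters are already exposed by the standard ZFS tools.
A rough sketch (the pool name `tank` is just a placeholder):

```shell
# Kick off a scrub, which reads every allocated block and verifies it
# against its checksum, repairing from redundancy where possible.
zpool scrub tank

# The CKSUM column shows the running count of checksum mismatches per
# device; "errors: No known data errors" means everything verified.
zpool status -v tank

# Clear the counters once recorded, so the next scrub starts from zero.
zpool clear tank
```

Scrub those pools on a schedule and log the CKSUM counts over time, and
you have exactly the per-vendor comparison data described above.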
ZFS, the DTrace of storage - no more guessing, just data.
/jason