On 4/11/07, Marco van Lienen <[EMAIL PROTECTED]> wrote:

> A colleague at work and I have followed the same steps, including
> running a digest on /test/file, on an SXCE build 61 system today, and
> we can confirm the exact same (and disturbing?) result.  My colleague
> mentioned to me that he has witnessed the same 'resilver' behavior on
> builds 57 and 60.

Thank you for taking the time to confirm this.  Just as long as people
are aware of it, it shouldn't really cause much trouble.  Still, it
gave me quite a scare after replacing a bad disk.
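For reference, the reproduction steps under discussion look roughly
like this (pool and device names are placeholders; only /test/file
comes from the earlier messages):

    # Record a checksum of the test file before replacing the disk.
    digest -a sha1 /test/file

    # Replace the failed disk and wait for the resilver to finish,
    # watching the CKSUM column along the way.
    zpool replace testpool c0t1d0
    zpool status testpool

    # Compare the digest after the resilver; in this thread it came
    # back identical despite the reported checksum errors.
    digest -a sha1 /test/file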

> I don't think these checksum errors are a good sign.
> The sha1 digest of the file *does* turn out to be the same, so the
> question arises: is the resilver process truly broken (even though
> in this test case the file appears to be unchanged based on
> the sha1 digest)?

ZFS still has good data, so this is not unexpected.  It is interesting,
though, that it managed to read all of the data without finding any bad
blocks.  I just tried this with a more complex directory structure and
other variations, with the same result.  It is bizarre, but in normal
operation ZFS manages to use only the good data.
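A quick way to exercise that observation is to force a full read of
the file and then look at the error counters (pool name hypothetical):

    # Force ZFS to read every block of the file.
    cat /test/file > /dev/null

    # The CKSUM column shows per-device checksum errors, yet the read
    # above succeeds and returns good data.
    zpool status testpool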

To see exactly what is damaged though, try the following instead.
After the resilver completes, zpool offline a known good device of the
RAID-Z.  Then, do a scrub or try to read the data.  Afterward, zpool
status -v will display a list of the damaged files, which is very
nice.
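In other words, the sequence would look something like this (pool and
device names are made up):

    # Take a known good member of the raidz vdev offline.
    zpool offline testpool c0t0d0

    # Scrub the pool (or simply read the data back).
    zpool scrub testpool

    # Once the scrub finishes, -v lists any files with unrecoverable
    # damage.
    zpool status -v testpool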

Chris
