>> From: Stephan Budach [mailto:stephan.bud...@jvm.de]
>> 
>>> Just in case this wasn't already clear.
>>> 
>>> After scrub sees read or checksum errors, zpool status -v will list
>>> filenames that are affected. At least in my experience.
>>> --
>>> - Tuomas
>> 
>> That didn't work for me. I ran a scrub, and afterwards zpool status -v
>> didn't list any additional corrupted files, even though the same three
>> files were corrupted in a number of snapshots, which zfs send of course
>> detected when it actually tried to send them.
> 
> Budy, we've been over this.
> 
> The behavior you experienced is explained by having corrupt data inside a
> hardware raid, and during the scrub you luckily read the good copy of
> redundant data.  During zfs send, you unluckily read the bad copy of
> redundant data.  This is a known problem as long as you use hardware raid.
> Avoiding it is one of ZFS's big selling points, one of the reasons it
> exists.  You should
> always give ZFS JBOD devices to work on, so ZFS is able to scrub both of the
> redundant sides of the data, and when a checksum error occurs, ZFS is able
> to detect *and* correct it.  Don't use hardware raid.
> 

Edward - I am working on that! 
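For what it's worth, the JBOD layout Edward recommends might look roughly like this (a sketch only; the pool name tank and the device names c0t1d0/c0t2d0 are placeholders, adjust for your system):

```shell
# Hand ZFS the raw disks so it manages the redundancy itself,
# instead of layering a pool on top of a hardware RAID LUN.
zpool create tank mirror c0t1d0 c0t2d0

# A scrub then reads *both* sides of the mirror and verifies every
# checksum; a bad copy is detected and repaired from the good one.
zpool scrub tank

# After a scrub finds unrepairable errors, the affected file names
# are listed here.
zpool status -v tank
```

With a hardware RAID LUN underneath, ZFS sees only one copy per block, so a scrub can neither compare the redundant sides nor repair a silently bad one.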

Although, I have to say that exactly the same three files were corrupt in 
each snapshot, until I finally deleted them and restored them from their 
original source.

zfs send aborts when it tries to send them, while scrub never notices 
anything. If zfs send had sent any of these snapshots successfully, or if 
any of my read attempts on these files had succeeded one time and failed 
another, I'd agree.
As it stands, I can't see how this behaviour is explained that way. Or, put 
differently: what are the chances that scrub always gets the "clean" blocks 
from the h/w raids, while zfs send or cp always get the corrupted ones?

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
