On 08/23/10 10:38 AM, Richard Elling wrote:
On Aug 21, 2010, at 9:22 PM, devsk wrote:

If dedup is ON and the pool develops corruption in a file, I can never fix it,
because when I try to copy the correct file on top of the corrupt file,
the block hash will match the existing blocks and only the reference count
will be updated. The only way to fix it is to delete all
snapshots (to remove all references), then delete the file, and then copy in the
valid file. That is a pretty high cost if it really works this way (empirical
evidence so far; I don't know the internal details).

Has anyone else experienced this?
zfs set dedup=verify dataset

IMNSHO, verify should be the default.
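
For anyone wanting to try this, a rough sketch follows (the pool, dataset, and
file names are made up). Setting dedup=verify tells ZFS to do a byte-for-byte
comparison whenever a write's hash matches an existing block, instead of
trusting the hash alone, which is the behaviour the original poster was missing:

    # hypothetical dataset name; "verify" is equivalent to sha256,verify
    zfs set dedup=verify tank/data
    zfs get dedup tank/data        # confirm the property took effect

    # with verification enabled, writing the known-good copy back is
    # compared against the on-disk data, not just against its checksum
    cp /backup/important.dat /tank/data/important.dat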

I thought it was the default for "lesser" checksum algorithms, given the long odds of an sha256 false positive?
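
For reference, the checksum and dedup settings currently in effect can be
inspected with zfs get, and the hash plus verification can be spelled out
explicitly (dataset name below is only an example):

    zfs get checksum,dedup tank/data
    zfs set dedup=sha256,verify tank/data   # explicit hash + byte comparison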

--
Ian.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
