On Aug 22, 2010, at 3:57 PM, Ian Collins wrote:
> On 08/23/10 10:38 AM, Richard Elling wrote:
>> On Aug 21, 2010, at 9:22 PM, devsk wrote:
>>   
>>> If dedup is ON and the pool develops corruption in a file, I can never
>>> fix it, because when I try to copy the correct file on top of the corrupt
>>> file, the block hash will match the existing blocks and only the reference
>>> count will be updated. The only way to fix it is to delete all snapshots
>>> (to remove all references), then delete the file, and then copy in the
>>> valid file. That is a pretty high cost if it is so (this is empirical
>>> evidence so far; I don't know the internal details).
>>> 
>>> Has anyone else experienced this?
>>>     
>> zfs set dedup=verify dataset
>> 
>> IMNSHO, verify should be the default.
>>   
> 
> I thought it was the default for "lesser" checksum algorithms, given the long
> odds of an SHA-256 false positive?


That was the original intent; however, the only checksum algorithm used for
dedup today is SHA-256.
 -- richard

-- 
OpenStorage Summit, October 25-27, San Francisco
http://nexenta-summit2010.eventbrite.com
ZFS and performance consulting
http://www.RichardElling.com
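
For reference, a minimal sketch of turning on verified dedup as suggested
above. The dataset name "tank/data" is only a placeholder, not from this
thread. With dedup=verify, a write whose block checksum matches an existing
dedup-table entry is compared byte-for-byte against the stored block before
the write is collapsed into a reference, so a correct copy should not be
silently deduplicated against a corrupt on-disk block:

  # Enable dedup with byte-for-byte verification on checksum match
  # ("tank/data" is a placeholder dataset name).
  zfs set dedup=verify tank/data

  # Confirm the property; it should now report dedup=verify with source "local".
  zfs get dedup tank/data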

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
