On 07/11/2012 12:24 PM, Justin Stringfellow wrote:
>> Suppose you find a weakness in a specific hash algorithm; you use this
>> to create hash collisions, and now imagine you store those colliding blocks
>> in a zfs dataset with dedup enabled using the same hash algorithm.....
> 
> Sorry, but isn't this what dedup=verify solves? I don't see the problem here. 
> Maybe all that's needed is a comment in the manpage saying hash algorithms 
> aren't perfect.

It does solve it, but at a cost to normal operation. With verify, every
write whose checksum matches an existing dedup-table entry must first read
the existing block back and compare it byte for byte. On a big enough and
reasonably busy deduped dataset, this turns a large share of write traffic
into extra read I/O.
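
Roughly, the verify path looks like this. This is only a simplified sketch
to illustrate where the extra read comes from, not the actual ZFS code; all
names here are made up for the example:

    # Sketch of a dedup write path with verify enabled (illustrative only).
    import hashlib
    import itertools

    dedup_table = {}                 # checksum -> address of the existing block
    storage = {}                     # address -> data already "on disk"
    _next_addr = itertools.count()

    def write_block(data, verify=True):
        h = hashlib.sha256(data).digest()
        if h in dedup_table:
            existing = dedup_table[h]
            if not verify:
                return existing          # trust the checksum alone
            # dedup=verify: read the existing block back and compare it
            # byte for byte -- this is the extra read every matching write pays.
            if storage[existing] == data:
                return existing          # true duplicate, nothing new written
            # checksum collision: the data differs, so write it out normally
        addr = next(_next_addr)
        storage[addr] = data
        dedup_table.setdefault(h, addr)  # keep the first entry on a collision
        return addr

With verify off, the checksum match alone decides; with it on, the read and
compare happen on every matching write, which is the cost being discussed.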

Cheers,
--
Saso