>On Fri, January 7, 2011 01:42, Michael DeMan wrote:
>> Then - there is the other side of things.  The 'black swan' event.  At
>> some point, given percentages on a scenario like the example case above,
>> one simply has to make the business justification case internally at their
>> own company about whether to go SHA-256 only or Fletcher+Verification?
>> Add Murphy's Law to the 'black swan event' and of course the only data
>> that is lost is that .01% of your data that is the most critical?
>
>The other thing to note is that by default (with de-dupe disabled), ZFS
>uses Fletcher checksums to prevent data corruption. Add also the fact that
>most other file systems don't have any checksums at all, and simply rely on
>disks having a bit error rate of (at best) 10^-16.
>
>Given the above: most people are content enough to trust Fletcher to not
>have data corruption, but are worried about SHA-256 giving 'data
>corruption' when it comes to de-dupe? The entire rest of the computing world
>is content to live with 10^-15 (for SAS disks), and yet one wouldn't be
>prepared to have 10^-30 (or better) for dedupe?
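(For what it's worth, the 10^-30 figure above is easy to sanity-check with a
back-of-envelope birthday bound; the block count below is an assumed example
workload, not anything from the thread:)

```python
# Birthday-bound estimate of the chance of ANY SHA-256 collision
# among n unique blocks: P <= n*(n-1) / 2**257.
def collision_bound(n_blocks):
    return n_blocks * (n_blocks - 1) / 2**257

# Assumed workload: 2**35 unique blocks (~4 PB of 128K records).
p = collision_bound(2**35)
print(p)  # on the order of 1e-57, vastly below a disk BER of 1e-15
```

Even for absurdly large pools the bound stays dozens of orders of magnitude
below the undetected bit error rate of the disks themselves.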


I would; we're not talking about flipping bits, but about the OS comparing
data using just the checksums and replacing one block with another.

You could deliberately construct a pair of files to show how weak Fletcher
really is, and two such files would not be stored correctly on a de-dup
zpool unless you enable verify.
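(To illustrate the kind of construction meant here, a minimal sketch using a
simplified two-accumulator Fletcher sum; ZFS's actual fletcher4 uses four
64-bit accumulators over 32-bit words, but the same class of weakness
applies, and the two example blocks are made up for illustration:)

```python
# Simplified two-accumulator Fletcher checksum (illustrative only,
# NOT ZFS's fletcher4).
def fletcher2_simple(words):
    a = b = 0
    for w in words:
        a += w   # running sum of the words
        b += a   # running sum of the running sums
    return (a, b)

# Two different blocks with identical checksums:
blk1 = [0, 2, 0]
blk2 = [1, 0, 1]
print(fletcher2_simple(blk1))  # (2, 4)
print(fletcher2_simple(blk2))  # (2, 4) -- same checksum, different data
```

With dedup on and verify off, a checksum-only match like this is exactly the
case where one block would silently replace the other.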




Casper

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
