On Fri, Jan 7, 2011 at 11:33 AM, Robert Milkowski <mi...@task.gda.pl> wrote:
> end up with the block A. Now if B is relatively common in your data set, you
> have a relatively big impact on many files because of one corrupted block
> (additionally, from a fs point of view this is a silent data corruption).
> Without dedup, if you get a single block corrupted silently, the impact will
> usually be relatively limited.
A pool can be configured so that a dedup'd block is only referenced a certain
number of times before ZFS stores another physical copy. So if you write out
10,000 identical blocks, they might end up written 10 times, with each copy
referenced 1,000 times. The threshold is controlled by the dedupditto property
on the pool: once a block's reference count passes it, ZFS writes an extra
ditto copy (and another when the count passes the square of the threshold).
Set it as your risk tolerance allows.

-B

--
Brandon High : bh...@freaks.com
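P.S. For reference, setting it looks something like this ('tank' is just a
placeholder pool name here; per the zpool man page the minimum nonzero value
is 100):

    # keep an extra physical copy once a deduped block passes 100 refs
    zpool set dedupditto=100 tank

    # verify the current setting
    zpool get dedupditto tank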