On Sat, 2009-11-07 at 17:41 -0500, Dennis Clarke wrote:
> Does the dedupe functionality happen at the file level or a lower block
> level?

it occurs at the block allocation level.

> I am writing a large number of files that have the following structure :
> 
> ------ file begins
> 1024 lines of random ASCII chars 64 chars long
> some tilde chars .. about 1000 of them
> some text ( english ) for 2K
> more text ( english ) for 700 bytes or so
> ------------------

ZFS's default block size is 128K and is controlled by the "recordsize"
filesystem property.  Unless you changed "recordsize", each of the files
above would be a single block distinct from the others.
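if you do want to experiment, recordsize is an ordinary filesystem
property; note it only affects blocks written after you change it, so
set it before copying the files in.  a sketch (dataset and pool names
here are placeholders):

```shell
# shrink the block size before writing the files
zfs set recordsize=8K tank/data

# enable block-level dedup on the dataset
zfs set dedup=on tank/data

# after writing, see what ratio you actually achieved
zpool get dedupratio tank
```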

you may or may not get better dedup ratios with a smaller recordsize:
it depends on whether the common parts of the files land on the same
block boundaries, since only whole identical blocks dedupe.

the cost of additional indirect blocks might overwhelm the savings from
deduping a small common piece of the file.
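the alignment effect is easy to simulate: hash each recordsize-sized
chunk across a set of files and count distinct hashes, which is roughly
what dedup's block-level matching does.  a sketch, using made-up file
contents shaped like the ones described above (64K of unique data
followed by a ~2K shared tail):

```python
import hashlib
import os

def unique_block_count(files, recordsize):
    """Hash every recordsize-aligned block across all files; dedup
    stores one copy per distinct block, so this approximates the
    number of blocks actually allocated."""
    seen = set()
    for data in files:
        for i in range(0, len(data), recordsize):
            seen.add(hashlib.sha256(data[i:i + recordsize]).digest())
    return len(seen)

# ten hypothetical files: a unique 64K random body plus a shared tail
tail = b"the same 2K of english text " * 74      # 2072 shared bytes
files = [os.urandom(64 * 1024) + tail for _ in range(10)]

# 128K recordsize: each file fits in one block, every block distinct
print(unique_block_count(files, 128 * 1024))     # -> 10

# 8K recordsize: the 64K prefix is block-aligned, so the shared tail
# collapses to a single block shared by all ten files
print(unique_block_count(files, 8 * 1024))       # -> 81, not 90
```

with the smaller recordsize the shared tail dedupes, but you now pay
for more block pointers and indirect blocks per file, which is the
trade-off mentioned above.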

                                                - Bill

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss