On Sat, 7 Nov 2009, Dennis Clarke wrote:

> Now the first test I did was to write 26^2 files [a-z][a-z].dat in 26^2
> directories named [a-z][a-z], where each file is 64K of random,
> non-compressible data and then some English text.

What method did you use to produce this "random" data?
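
(For reference, here is a minimal Python sketch of the sort of generator I
have in mind; the one-file-per-directory layout, the os.urandom() source,
and the trailing sentence are my assumptions, not necessarily what was
actually run:)

    import itertools
    import os
    import string

    SIZE = 64 * 1024  # 64K of non-compressible bytes per file
    TEXT = b"\nThe quick brown fox jumps over the lazy dog.\n"  # stand-in English text

    # create directories aa..zz, each holding one matching .dat file
    for name in ("".join(p) for p in
                 itertools.product(string.ascii_lowercase, repeat=2)):
        os.makedirs(name, exist_ok=True)
        with open(os.path.join(name, name + ".dat"), "wb") as f:
            f.write(os.urandom(SIZE))  # fresh, incompressible payload per file
            f.write(TEXT)              # shared trailing text

If the "random" bytes were instead generated once and reused for every
file, dedup would of course find plenty to share.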

> The dedupe ratio has climbed to 1.95x with all those unique files that are
> less than %recordsize% bytes.

Perhaps there are other types of blocks besides user data blocks (e.g.
metadata blocks) that are also subject to deduplication?  Presumably
'dedupratio' is based on a count of blocks rather than on a percentage of
the total data.
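
A back-of-the-envelope illustration of that second point (the block counts
below are invented purely to show how a block-count-weighted ratio could
sit near 1.95x even while every user-data block is unique; I am only
guessing at how 'dedupratio' is computed):

    # assumed formula: ratio = blocks referenced / blocks allocated
    unique_data_refs   = 676   # one record per unique 64K file (recordsize > 64K)
    unique_data_allocs = 676   # none of these dedupe

    other_refs   = 1320        # hypothetical additional blocks...
    other_allocs = 347         # ...that collapse heavily in the dedup table

    ratio = (unique_data_refs + other_refs) / (unique_data_allocs + other_allocs)
    print("dedupratio ~ %.2fx" % ratio)   # prints "dedupratio ~ 1.95x"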

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
