You have fallen into the same trap I fell into: df(1M) is not dedup-aware, because dedup occurs at the pool level, not at the filesystem level. If you look at your df output you can see that the filesystem's reported size appears to keep growing as you copy the file, which is counter-intuitive.

Once you start using ZFS, and in particular dedup but also other data services such as snapshots, you need to start using the zpool and zfs reporting commands and abandon df(1M).
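For example, something along these lines should show what dedup is actually saving at the pool level (rpool is just the pool from your output, and the exact columns vary by build, so treat this as a sketch rather than verbatim output):

  # zpool get dedupratio rpool              (pool-wide dedup ratio)
  # zpool list rpool                        (dedup-capable builds also print a DEDUP column)
  # zfs list -o space rpool/export/home     (breaks 'used' down into snapshots, dataset, children, refreservation)

The dataset-level 'used' and 'referenced' values still report the logical size of the data, which is why they kept climbing in your test; the deduplicated (physical) usage only shows up in the pool-level numbers.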
/d

2009/11/27 Chavdar Ivanov <ci4...@gmail.com>
> 2009/11/27 Thomas Maier-Komor <tho...@maier-komor.de>:
> > Chavdar Ivanov schrieb:
> >> Hi,
> >>
> >> I BFUd successfully snv_128 over snv_125:
> >>
> >> ---
> >> # cat /etc/release
> >> Solaris Express Community Edition snv_125 X86
> >> Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
> >> Use is subject to license terms.
> >> Assembled 05 October 2009
> >> # uname -a
> >> SunOS cheeky 5.11 snv_128 i86pc i386 i86pc
> >> ...
> >>
> >> being impatient to test zfs dedup. I was able to set dedup=on (I presume
> >> with the default sha256 key) on a few filesystems and did the following
> >> trivial test (this is an edited script session):
> >>
> >> Script started on Wed Oct 28 09:38:38 2009
> >> # zfs get dedup rpool/export/home
> >> NAME               PROPERTY  VALUE  SOURCE
> >> rpool/export/home  dedup     on     local
> >> # for i in 1 2 3 4 5 ; do mkdir /export/home/d${i} && df -k /export/home/d${i} && zfs get used rpool/export/home && cp /testfile /export/home/d${i}; done
> >> Filesystem            kbytes    used   avail capacity  Mounted on
> >> rpool/export/home   17418240      27 6063425     1%    /export/home
> >> NAME               PROPERTY  VALUE  SOURCE
> >> rpool/export/home  used      27K    -
> >> Filesystem            kbytes    used   avail capacity  Mounted on
> >> rpool/export/home   17515512  103523 6057381     2%    /export/home
> >> NAME               PROPERTY  VALUE  SOURCE
> >> rpool/export/home  used      102M   -
> >> Filesystem            kbytes    used   avail capacity  Mounted on
> >> rpool/export/home   17682840  271077 6056843     5%    /export/home
> >> NAME               PROPERTY  VALUE  SOURCE
> >> rpool/export/home  used      268M   -
> >> Filesystem            kbytes    used   avail capacity  Mounted on
> >> rpool/export/home   17852184  442345 6054919     7%    /export/home
> >> NAME               PROPERTY  VALUE  SOURCE
> >> rpool/export/home  used      432M   -
> >> Filesystem            kbytes    used   avail capacity  Mounted on
> >> rpool/export/home   17996580  587996 6053933     9%    /export/home
> >> NAME               PROPERTY  VALUE  SOURCE
> >> rpool/export/home  used      574M   -
> >> # zfs get all rpool/export/home
> >> NAME               PROPERTY              VALUE                  SOURCE
> >> rpool/export/home  type                  filesystem             -
> >> rpool/export/home  creation              Mon Sep 21  9:27 2009  -
> >> rpool/export/home  used                  731M                   -
> >> rpool/export/home  available             5.77G                  -
> >> rpool/export/home  referenced            731M                   -
> >> rpool/export/home  compressratio         1.00x                  -
> >> rpool/export/home  mounted               yes                    -
> >> rpool/export/home  quota                 none                   default
> >> rpool/export/home  reservation           none                   default
> >> rpool/export/home  recordsize            128K                   default
> >> rpool/export/home  mountpoint            /export/home           inherited from rpool/export
> >> rpool/export/home  sharenfs              off                    default
> >> rpool/export/home  checksum              on                     default
> >> rpool/export/home  compression           off                    default
> >> rpool/export/home  atime                 on                     default
> >> rpool/export/home  devices               on                     default
> >> rpool/export/home  exec                  on                     default
> >> rpool/export/home  setuid                on                     default
> >> rpool/export/home  readonly              off                    default
> >> rpool/export/home  zoned                 off                    default
> >> rpool/export/home  snapdir               hidden                 default
> >> rpool/export/home  aclmode               groupmask              default
> >> rpool/export/home  aclinherit            restricted             default
> >> rpool/export/home  canmount              on                     default
> >> rpool/export/home  shareiscsi            off                    default
> >> rpool/export/home  xattr                 on                     default
> >> rpool/export/home  copies                1                      default
> >> rpool/export/home  version               4                      -
> >> rpool/export/home  utf8only              off                    -
> >> rpool/export/home  normalization         none                   -
> >> rpool/export/home  casesensitivity       sensitive              -
> >> rpool/export/home  vscan                 off                    default
> >> rpool/export/home  nbmand                off                    default
> >> rpool/export/home  sharesmb              off                    default
> >> rpool/export/home  refquota              none                   default
> >> rpool/export/home  refreservation        none                   default
> >> rpool/export/home  primarycache          all                    default
> >> rpool/export/home  secondarycache        all                    default
> >> rpool/export/home  usedbysnapshots       0                      -
> >> rpool/export/home  usedbydataset         731M                   -
> >> rpool/export/home  usedbychildren        0                      -
> >> rpool/export/home  usedbyrefreservation  0                      -
> >> rpool/export/home  logbias               latency                default
> >> rpool/export/home  dedup                 on                     local
> >> rpool/export/home  mlslabel              none                   default
> >> # ls -l /export/home/d?
> >> /export/home/d1:
> >> total 299237
> >> -rw-r-----   1 root     root     152993234 Oct 28 09:41 testfile
> >>
> >> /export/home/d2:
> >> total 299237
> >> -rw-r-----   1 root     root     152993234 Oct 28 09:41 testfile
> >>
> >> /export/home/d3:
> >> total 299237
> >> -rw-r-----   1 root     root     152993234 Oct 28 09:42 testfile
> >>
> >> /export/home/d4:
> >> total 299237
> >> -rw-r-----   1 root     root     152993234 Oct 28 09:42 testfile
> >>
> >> /export/home/d5:
> >> total 299237
> >> -rw-r-----   1 root     root     152993234 Oct 28 09:42 testfile
> >> # sync
> >> # sync
> >> # zfs get dedup,used rpool/export/home
> >> NAME               PROPERTY  VALUE  SOURCE
> >> rpool/export/home  dedup     on     local
> >> rpool/export/home  used      731M   -
> >> #
> >> script done on Wed Oct 28 09:44:16 2009
> >>
> >>
> >> By the look of it, nothing happens. Is there anything else to do to
> >> enable the dedup, or I am completely misunderstanding the way it should be
> >> working?
> >>
> >> Chavdar Ivanov
> >
> > Hi Chavdar,
> >
> > as far as I understood it, the dedup works during writing, and won't
> > deduplicate already written data (this is planned for a later release).
> I understand the same; the last command of the for line in the example
> above is actual copying of /testfile to five separate directories
> within a ZFS which has dedup=on; as the deduplication is synchronous,
> I thought I should be able to see on the fly the disk usage as
> expressed by 'df -k' and the ZFS 'used' attribute stay the same.
> >
> > - Thomas
>
> Chavdar
>
> --
> ----
> Jonathan Swift - "May you live every day of your life." -
> http://www.brainyquote.com/quotes/authors/j/jonathan_swift.html
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Dominic Kay
+44 780 124 6099