I'm betting you have snapshots of the "fragmented" filesystem that you don't know about. Fragmentation doesn't reduce the amount of usable space in a pool. Also, unless you used the '--inplace' option for rsync, rsync won't cause much fragmentation, since by default it writes each transferred file out as a complete new copy and then renames it into place.
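
For illustration only (the source path below is made up; the destination is your dataset's mountpoint), this is the difference in how the two rsync modes lay data down, and why only --inplace tends to fragment a copy-on-write filesystem:

  # Default behaviour: rsync builds a temporary copy of each changed file in
  # the destination directory and renames it over the old one, so every run
  # writes the file out fresh rather than rewriting blocks in place.
  rsync -a /some/source/images/ /export/archiv/VMs/rsync/

  # With --inplace, rsync rewrites only the changed blocks inside the existing
  # file; on ZFS (copy-on-write) those blocks land in new locations, and any
  # snapshot keeps the superseded blocks allocated.
  rsync -a --inplace /some/source/images/ /export/archiv/VMs/rsync/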

do this:  'zfs list -r -t all zpool1/vmwarersync'

and see what output you get.
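
If that shows snapshots hanging off the dataset, something like the following (the snapshot name is just a placeholder) will show how much space they pin down, and destroying the ones you no longer need gives it back:

  # list only the snapshots under the dataset
  zfs list -r -t snapshot zpool1/vmwarersync

  # then, for any snapshot you can do without:
  zfs destroy zpool1/vmwarersync@old-snapshot-name

Keep in mind that a snapshot's USED column only counts space unique to that snapshot, so several overlapping snapshots can each look small while together holding a few terabytes.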

-Erik



Holger Isenberg wrote:
Do we have severe fragmentation here on our X4500 with Solaris 10, ZFS pool
version 10?

What can be done, other than zfs send/receive, to free the fragmented space?

One ZFS filesystem was used for some months to store large disk images (each about
50 GByte), which were copied there with rsync. That filesystem now reports 6.39 TByte
used according to zfs list but only 2 TByte used according to du.

The other ZFS filesystem was used for similarly sized disk images, this time copied
via NFS as whole files. On that one, du and zfs list report essentially the same
usage of about 3.7 TByte.

bash-3.00# zfs list -r zpool1/vmwarersync
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zpool1/vmwarersync  6.39T   985G  6.39T  /export/archiv/VMs/rsync

bash-3.00# du -hs /export/archiv/VMs/rsync
 2.0T   /export/archiv/VMs/rsync

bash-3.00# zfs list -r zpool1/vmwarevcb
NAME               USED  AVAIL  REFER  MOUNTPOINT
zpool1/vmwarevcb  3.75T   985G  3.75T  /export/archiv/VMs/vcb

bash-3.00# du -hs /export/archiv/VMs/vcb
 3.7T   /export/archiv/VMs/vcb

bash-3.00# zpool upgrade
This system is currently running ZFS pool version 10.

bash-3.00# zpool status zpool1
  pool: zpool1
 state: ONLINE
 scrub: scrub completed after 14h2m with 0 errors on Thu Mar  4 10:22:47 2010
config:

bash-3.00# zpool list zpool1
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zpool1  20.8T  19.3T  1.53T    92%  ONLINE  -


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

