On Tue, Oct 04, 2011 at 09:28:36PM -0700, Richard Elling wrote:
> On Oct 4, 2011, at 4:14 PM, Daniel Carosone wrote:
> 
> > I sent it twice, because something strange happened on the first send,
> > to the ashift=12 pool. "zfs list -o space" showed figures at least
> > twice those on the source, maybe roughly 2.5 times.
> 
> Can you share the output?
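Sure. The columns below are those of "zfs list -o space" with volsize
appended; the invocation was roughly the following (from memory - the
exact property list may have differed, and other datasets are trimmed):

  zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild,volsize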
Source machine, zpool v14 snv_111b:

NAME          AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  VOLSIZE
int/iscsi_01  99.2G  237G     37.9G    199G              0          0     200G

Destination machine, zpool v31 snv_151b:

NAME           AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  VOLSIZE
geek/iscsi_01  3.64T  550G     88.4G    461G              0          0     200G
uext/iscsi_01  1.73T  245G     39.2G    206G              0          0     200G

geek is the ashift=12 pool, obviously. I'm assuming the smaller
difference for uext is due to other layout differences between the pool
versions.

> > What is going on? Is there really that much metadata overhead? How
> > many metadata blocks are needed for each 8k vol block, and are they
> > each really only holding 512 bytes of metadata in a 4k allocation?
> > Can they not be packed appropriately for the ashift?
> 
> Doesn't matter how small metadata compresses, the minimum size you can
> write is 4KB.

This isn't about whether the metadata compresses; it's about whether
ZFS is smart enough to use all of the space in a 4k block for metadata,
rather than assuming it can fit at best 512 bytes, regardless of
ashift. By packing, I meant packing them full rather than leaving them
mostly empty and wasted - not anything to do with compression.

> I think we'd need to see the exact layout of the internal data. This
> can be achieved with the zfs_blkstats macro in mdb. Perhaps we can
> take this offline and report back?

Happy to - what other details / output would you like?

--
Dan.
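PS: if a userland view of the block breakdown helps too, I can run
something like the following against each pool - it should give block
counts and logical/physical/allocated sizes per object type (flags from
memory, so treat this as a sketch):

  zdb -bb geek
  zdb -bb uext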