I think there are 2 potential issues here.

The ZFS cache (the ARC) manages memory for all pools on a
system, but the data is not really organized per pool. So on
a pool export we don't free up the buffers associated with
that pool. The memory is actually returned to the system
either when memory pressure arises or on a modunload of the
ZFS module; yep, that's a bit extreme.
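For what it's worth, a rough way to see (and, in the extreme, reclaim) that
memory; this is only a sketch, the arcstats kstat only shows up on more
recent builds, and <id> below is just a placeholder for whatever module id
modinfo reports:

        # echo ::memstat | mdb -k          # kernel vs. anon vs. free page breakdown
        # kstat -p zfs:0:arcstats:size     # bytes currently held by the ARC (recent builds)
        # modinfo | grep zfs               # note the zfs module id
        # modunload -i <id>                # frees the ARC, but only with no pools imported

Until something like the RFE below exists, the modunload is really the only
way to hand all of it back explicitly.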

So how about an RFE, say:

        6424665: "ZFS/ARC should cleanup more after itself"

That would have helped your scenario.

But the other point I see here is that "swap -d"
failed to exert the required memory pressure on ZFS.
Sounds like another bug we'd need to track.
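A quick way to check that, using the device from Daniel's transcript, is to
watch freemem around the delete attempt:

        # kstat -p unix:0:system_pages:freemem    # free pages before
        # swap -d /dev/md/dsk/d2
        # kstat -p unix:0:system_pages:freemem    # should drop noticeably if pressure is applied

If freemem barely moves while the ARC stays large, that would point at the
pressure path on the ZFS side rather than at swap itself.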


-r

Daniel Rock writes:
 > Roch Bourbonnais - Performance Engineering wrote:
 > > As already noted, this need not be different from other FS
 > > but it is still an interesting question. I'll touch on 3 aspects
 > > here:
 > > 
 > >    - reported freemem
 > >    - syscall writes to mmap pages
 > >    - application write throttling
 > > 
 > > 
 > > Reported freemem will be lower when running with ZFS than,
 > > say, UFS. The UFS page cache is counted as freemem. ZFS
 > > will return its 'cache' only when memory is needed. So you
 > > will operate with lower freemem but won't actually suffer
 > > from this.
 > 
 > 
 > The RAM usage should be made more transparent to the administrator. Just
 > today, after installing snv_37 on another machine, I couldn't disable swap
 > because ZFS had grabbed all the free memory it could get and didn't release
 > it (even after a "zpool export"):
 > 
 > # swap -l
 > swapfile             dev  swaplo blocks   free
 > /dev/md/dsk/d2      85,4       8 4193272 4193272
 > 
 > # swap -s
 > total: 275372k bytes allocated + 93876k reserved = 369248k used, 1899492k available
 > 
 > # swap -d /dev/md/dsk/d2
 > /dev/md/dsk/d2: Not enough space
 > 
 > # kstat | grep pp_kernel
 >          pp_kernel                       872514
 > 
 > # prtconf | head -2
 > System Configuration:  Sun Microsystems  i86pc
 > Memory size: 4095 Megabytes
 > 
 > # zpool export pool
 > # zpool list
 > no pools available
 > # swap -d /dev/md/dsk/d2
 > 
 > This was shortly after installation with not much running on the machine. To 
 > speed up 'mirroring' of swap I usually do the following (but couldn't in 
 > this case):
 > 
 > swap -d /dev/md/dsk/d2
 > metaclear d2
 > metainit d2 -m d21 d22 0
 > swap -a /dev/md/dsk/d2
 > 
 > 
 > 
 > 
 > Daniel
