On 05 January, 2007 - Mark Maybee sent me these 2,9K bytes:

> Tomas Ögren wrote:
> >On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes:
> >
> >>So it looks like this data does not include ::kmastat info from *after*
> >>you reset arc_reduce_dnlc_percent.  Can I get that?
> >
> >Yeah, attached. (although about 18 hours after the others)
> >
> Excellent, this confirms #3 below.

Theories that match reality are a big plus :)

> >>What I suspect is happening:
> >>    1 with your large ncsize, you eventually ran the machine out
> >>      of memory because (currently) the arc is not accounting for
> >>      the space consumed by "auxiliary" caches (dnode_t, etc.).
> >>    2 the arc could not reduce at this point since almost all of
> >>      its memory was tied up by the dnlc refs.
> >>    3 when you eventually allowed the arc to reduce the dnlc size,
> >>      it managed to free up some space, but much of this did not
> >>      "appear" because it was tied up in slabs in the auxiliary
> >>      caches (fragmentation).
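
For anyone else poking at this: the knob Mark refers to can be read and
flipped at runtime with mdb. As far as I can tell from arc.c, 3 is the
default (the ARC asks the DNLC to shed a few percent of its entries when
it needs memory) and 0 turns the trimming off. Roughly:

    # read the current value, plus the DNLC entry count and its cap
    echo 'arc_reduce_dnlc_percent/D' | mdb -k
    echo 'dnlc_nentries/D' | mdb -k
    echo 'ncsize/D' | mdb -k

    # e.g. put it back to the default of 3 so the ARC can trim the DNLC again
    echo 'arc_reduce_dnlc_percent/W 0t3' | mdb -kw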

> >Any idea where all the memory is going? I sure hope that 500k dnlc
> >entries (+dnode_t's etc. belonging to them) aren't using up about 2 GB
> >of RAM..?
> >
> Actually, that's pretty much what is happening:
>       500k dnlc => 170MB in the vnodes (vn_cache)
>                  + 320MB in znode_phys data (zio_buf_512)
>                  + 382MB in dnode_phys data (zio_buf_16384)
>                  + 208MB in dmu bufs (dmu_buf_impl_t)
>                  + 400MB in dnodes (dnode_t)
>                  + 120MB in znodes (zfs_znode_cache)
>                 ---------
>       total       1600MB
> 
> These numbers come from the last ::kmastat you ran before reducing the
> DNLC size.  Note below that much of this space is still consumed by
> these caches, even after the DNLC has dropped its references.  This is
> largely due to fragmentation in the caches.

http://www.acc.umu.se/~stric/tmp/dnlc-plot.png

They seem pretty closely related (the buffers plotted are the caches you
mention above), but the spike at the end doesn't look very nice. Also,
about 3 kB per entry is quite a bit more than the 64 bytes per entry that
UFS uses according to the docs, which isn't too promising for a fileserver
with less than 32 GB of RAM right now :)
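
The back-of-the-envelope math behind that ~3 kB figure is just Mark's
1600 MB total divided by the ~500k DNLC entries:

    $ echo '1600*1024*1024/500000' | bc
    3355

i.e. around 3.3 kB of kernel memory per cached name once the vnode, znode,
dnode and the 512/16k on-disk buffers are counted, versus the ~64 bytes
per entry quoted for UFS.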

> >------------------------------------------------------------------------
> >
> >cache                        buf    buf    buf    memory     alloc alloc 
> >name                        size in use  total    in use   succeed  fail 
> >------------------------- ------ ------ ------ --------- --------- ----- 
> >vn_cache                     240 405388 657696 173801472    948191     0 
> ...
> >zio_buf_512                  512 137801 294975 161095680  43660052     0 
> ...
> >zio_buf_16384              16384   6692   6697 109723648   5877279     0 
> ...
> >dmu_buf_impl_t               328 145260 622392 212443136  65461261     0 
> >dnode_t                      640 137799 512508 349872128  37995548     0 
> ...
> >zfs_znode_cache              200 137763 568040 116334592  35683478     0 
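
Summing the "memory in use" column for just those six caches in the dump
above still gives about 1071 MB, which fits Mark's point about
fragmentation keeping the slabs pinned. A quick and dirty way to pull that
number out (assuming the usual ::kmastat column order shown in the header
above):

    echo ::kmastat | mdb -k | awk '
        /^(vn_cache|zio_buf_512|zio_buf_16384|dmu_buf_impl_t|dnode_t|zfs_znode_cache) / {
            sum += $5               # "memory in use", in bytes
        }
        END { printf("%.0f MB\n", sum / 1048576) }'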

I can keep doing plots/dumps of these metrics; a rough collection loop is
sketched below. I'll try locking it in at 200k entries or so and see what
happens.
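
Something along these lines (untested, and the cache list is just the six
from above) ought to be enough for crude time-series plots; capping the
DNLC itself would mean set ncsize=200000 in /etc/system and a reboot, as
far as I know:

    # snapshot the interesting caches plus the DNLC entry count every 10 min
    while :; do
        date '+%s'
        echo 'dnlc_nentries/D' | mdb -k
        echo ::kmastat | mdb -k | \
            egrep '^(vn_cache|zio_buf_512|zio_buf_16384|dmu_buf_impl_t|dnode_t|zfs_znode_cache) '
        sleep 600
    done >> /var/tmp/dnlc-kmastat.log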

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se