On Mon, 7 Feb 2005, Jan Kasprzak wrote:
>
> The server has been running 2.6.11-rc2 + a patch to fs/pipe.c
> for the last 8 days.
> 
> # cat /proc/meminfo
> MemTotal:      4045168 kB
> Cached:        2861648 kB
> LowFree:         59396 kB
> Mapped:         206540 kB
> Slab:           861176 kB

Ok, pretty much everything there is accounted for: you've got 4GB of
memory, and almost all of it is in cached/mapped/slab. So if something is
leaking, it's in one of those three.
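(Quick sanity check on the numbers above: 2861648 kB cached + 206540 kB
mapped + 861176 kB slab = 3929364 kB against 4045168 kB MemTotal, so those
three really do cover almost all of it.)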

And I think I see which one it is:

> # cat /proc/slabinfo
> slabinfo - version: 2.1
> # name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <batchcount> <limit> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
> biovec-1          5506200 5506200     16  225    1 : tunables  120   60    8 : slabdata  24472  24472    240
> bio               5506189 5506189    128   31    1 : tunables  120   60    8 : slabdata 177619 177619    180

Whee. You've got 5 _million_ bios "active", which account for about 750MB
of your 860MB of slab usage.
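(Back-of-the-envelope from the slabinfo above: 5506189 bio objects at 128
bytes each is ~670MB, and 5506200 biovec-1 objects at 16 bytes each is
another ~85MB -- call it 750-odd MB between the two caches.)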

Jens, any ideas? Doesn't look like the "md sync_page_io bio leak", since
that would just lose one bio per md super block read according to you (and
that's the only one I can find fixed since -rc2). I doubt Jan has caused
five million of those...
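
Purely as a sketch of that failure mode (this is NOT the actual drivers/md
code -- the function names, structure and callback signature below are my
own illustration against the 2.6-era bio API, error handling omitted): a
one-bio-per-call leak is typically a synchronous submit path that
allocates a bio, waits for it, and returns without the final bio_put():

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/completion.h>
#include <linux/fs.h>

/* assumed completion callback: wakes the waiter when the bio finishes */
static int read_done(struct bio *bio, unsigned int bytes_done, int error)
{
        if (bio->bi_size)               /* not fully completed yet */
                return 1;
        complete((struct completion *)bio->bi_private);
        return 0;
}

/* each call takes one object from the "bio" cache + one from "biovec-1" */
static int sync_read_page(struct block_device *bdev, sector_t sector,
                          struct page *page, int size)
{
        struct bio *bio = bio_alloc(GFP_NOIO, 1);
        struct completion event;
        int uptodate;

        init_completion(&event);
        bio->bi_bdev = bdev;
        bio->bi_sector = sector;
        bio->bi_private = &event;
        bio->bi_end_io = read_done;
        bio_add_page(bio, page, size, 0);

        submit_bio(READ, bio);
        wait_for_completion(&event);

        uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
        bio_put(bio);   /* <-- the line a leaky caller is missing */
        return uptodate;
}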

Jan - can you give Jens a bit of an idea of what drivers and/or schedulers 
you're using?
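
(If it helps, and assuming your kernel exposes the runtime elevator switch
in sysfs: "cat /sys/block/<dev>/queue/scheduler" shows which I/O scheduler
each queue is using, and lsmod plus the relevant parts of your .config
would cover the block/RAID drivers involved.)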

                Linus