If you turn on kernel memory auditing with the kmem_flags variable,
you can use the ::kmausers dcmd in mdb to see which kernel stack
traces account for the most memory allocated from each kmem cache.
To enable auditing, add 'set kmem_flags=0x1' to /etc/system and reboot.
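As a sketch, the whole procedure looks like this (0x1 is the audit
flag; the cache names passed to ::kmausers are just examples here --
pick whichever caches ::kmastat shows as the largest consumers):

```
# In /etc/system (takes effect only after a reboot):
set kmem_flags=0x1

# After the reboot, let the workload run for a while, then:
# mdb -k
> ::kmausers kmem_alloc_160 kmem_alloc_192
```

With auditing enabled, each allocation records its caller's stack, so
::kmausers can group outstanding buffers by allocating stack trace.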

The possible settings for kmem_flags are described at:

http://docs.sun.com/app/docs/doc/806-4015/6jd4gh8et?l=en&q=kmem_flags&a=view

The ::kmausers dcmd is described at:

http://docs.sun.com/app/docs/doc/816-5041/modules-63?l=en&q=kmausers&a=view

This won't show you specific threads, but the stack traces will probably
point to a specific subsystem that is driving the memory utilization.
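If a reboot isn't practical right away, a DTrace sketch along these
lines may also help narrow things down: instead of lquantizing the
size (arg0), aggregate the requested bytes by kernel stack. Note the
caveats in the comments -- it only sees allocations made while the
script runs, and it can't tell which buffers are still held.

```
#!/usr/sbin/dtrace -s
/*
 * Sum requested bytes per allocating kernel stack.  This counts
 * allocation traffic while the script runs; it does not distinguish
 * buffers that are later freed from those still held, and it misses
 * anything allocated before the script started.
 */
fbt::kmem_alloc:entry
{
        @bytes[stack()] = sum(arg0);
}

/* On exit, keep only the top 10 stacks by bytes requested. */
END
{
        trunc(@bytes, 10);
}
```

The heaviest stacks usually point at the same subsystem that
::kmausers would identify, just with less precision about what is
currently outstanding.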

HTH,
David Lutz

----- Original Message -----
From: "Haiou Fu (Kevin)" <fuha...@yahoo.com>
Date: Wednesday, December 31, 2008 2:07 pm

> We have a V490 server that is running short of memory. (It has 32GB 
> of memory and a lot of zones running Sybase inside; at times we see 
> in vmstat that sr is non-zero and anon paging is non-zero.)
> 
> ::memstat shows 55% of memory used by the kernel, which we don't think is normal.
> (I remember someone saying that about 15% kernel memory usage is average 
> unless in-kernel web server caching is in use, which is not our case.)
> 
> r...@adriatic:~# mdb -k
> Loading modules: [ unix krtld genunix specfs dtrace ufs ssd fcp fctl 
> pcisch md sd isp ip sctp usba nca random ipc cpc wrsmd crypto fcip 
> logindmux ptm sppp lofs nfs ]
> > ::memstat
> Page Summary                Pages                MB  %Tot
> ------------     ----------------  ----------------  ----
> Kernel                    2284580             17848   55%
> Anon                      1315098             10274   31%
> Exec and libs                8999                70    0%
> Page cache                 428147              3344   10%
> Free (cachelist)            55604               434    1%
> Free (freelist)             86325               674    2%
> 
> Total                     4178753             32646
> Physical                  4111623             32122
> 
> ::kmastat shows the following caches/buffers are the top offenders:
> cache                      buf      buf      buf     memory      alloc  alloc
> name                      size   in use    total     in use    succeed   fail
> ------------------------- ---- -------- -------- ---------- ---------- -----
> kmem_alloc_160             160 18989494 18991400 3111550976 3197504567     0
> kmem_alloc_192             192 14571267 14571816 2842198016 1040216225     0
> kmem_alloc_96               96 26788147 26797596 2613403648 2770719350     0
> kmem_alloc_112             112 20794701 20798784 2366439424 1419617084     0
> kmem_alloc_128             128 13589202 13590234 1767161856 3413863438     0
> kmem_alloc_80               80 15595955 15666110 1270661120 1255159188     0
> kmem_alloc_8192           8192   137262   137543 1126752256 1045473274     0
> kmem_alloc_224             224  4857609  4857912 1105444864 2902379493     0
> ......
> (Adding them up gives about 16GB, roughly 50% of physical memory.)
> 
> We are trying to figure out which threads/processes are holding those 
> caches (kmem_alloc_160, 192, 96, etc.) but don't know how to do that. 
> The KMA debug flags and allocator logging are not turned on.
> 
> We tried "::walk kmem_alloc_160" and "0x......$<kmem_cache", but 
> cannot readily see who has been using those buffers (kmem_alloc_160, 
> 192, 96, etc.)
> 
> We tried DTrace on kmem_alloc:entry and lquantized the buffer size 
> (arg0), but that was also inconclusive; besides, it does not cover 
> buffers that were already allocated before the DTrace script starts.
> 
> Any advice is greatly appreciated!  And we wish everyone a happy and 
> successful new year in 2009!
> -- 

_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org