> If you turn on kernel memory auditing with the kmem_flags variable,
> you can use the ::kmausers dcmd in mdb to see which kernel stacks
> resulted in the most memory allocations from each kmem cache. To
> enable auditing, add 'set kmem_flags=0x1' to /etc/system and reboot.
>
> > We have a V490 server that is running short of memory. (It has 32GB
> > of memory and a lot of zones running Sybase inside; at times we see
> > in vmstat that sr is not 0 and anon paging is not 0.)
> >
> > We are trying to figure out who (threads/processes) is holding those
> > caches (kmem_alloc_160, 192, 96, etc.) but don't know how to do
> > that. The KMA debug flag and allocator logging are not turned on.
> >
> > We tried "::walk kmem_alloc_160" and "0x......$<kmem_cache", but we
> > can't readily see who has been using those buffers (kmem_alloc_160,
> > 192, 96, etc.).
> >
> > We tried dtrace on kmem_alloc:entry with lquantize of the buffer
> > size (arg0), but it is also not obvious, and it does not cover
> > buffers that were already allocated before the dtrace script starts.
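For reference, once auditing is in effect, the ::kmausers dcmd suggested
above can be pointed at a specific cache on a live kernel. A minimal
sketch, using one of the cache names from the question:

# echo "::kmausers kmem_alloc_160" | mdb -k

Note that this reports stacks recorded since auditing was enabled at
boot, which is what addresses the concern about buffers allocated
before a dtrace script starts.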
The kmem audit facility consumes additional kernel memory. If you're
already concerned about running out of memory, I would only use kmem
audit as a last resort.

DTrace is the right idea; however, I would suggest a different
aggregation method than lquantize. Something like the following would
be a good starting point:

# dtrace -n 'fbt::kmem_alloc:entry { @a[stack()] = sum(args[0]); } END { trunc(@a, 20) }'

This will print the top 20 thread stacks that have allocated the most
memory. From this data, you may see a common pattern. It's also
possible to modify the aggregation to break up stacks by execname, in
case some program is inducing the overhead. There's an example of this
given below:

# dtrace -n 'fbt::kmem_alloc:entry { @a[execname, stack()] = sum(args[0]); } END { trunc(@a, 20) }'

-j
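P.S. If you want to narrow this to the specific caches you mentioned,
one variation is to trace kmem_cache_alloc and filter on the cache
name. This is only a sketch; it assumes fbt provides typed arguments
for kmem_cache_alloc so that the cache_name member of kmem_cache_t is
reachable from the probe:

# dtrace -n 'fbt::kmem_cache_alloc:entry /stringof(args[0]->cache_name) == "kmem_alloc_160"/ { @a[execname, stack()] = count(); } END { trunc(@a, 20) }'

Since every buffer in a given cache is the same size, count() is as
meaningful as sum() here. Either way, DTrace only sees allocations made
while the script is running; for buffers allocated beforehand, the
audit/::kmausers approach is the only way to recover stacks.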