If I understand it correctly, the alloc fail count for sbrk_top is just an indication that the heap had to be grown. That is different from a failure in any other arena, which would indicate that we actually ran out of memory.
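To make the grow-and-retry pattern concrete, here is a minimal self-contained sketch (my own illustration, not the real vmem_sbrk.c code; the span variables and counter names are made up): the first attempt against an exhausted arena is counted as a "fail", the heap is extended with sbrk(2), and the request is then retried and satisfied.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define GROW_QUANTUM (64 * 1024)    /* grow the heap in 64K chunks */

    static char *span_cur, *span_end;   /* current free span */
    static unsigned long alloc_fail;    /* what umastat would report */

    static void *
    sbrk_top_alloc(size_t size)
    {
        for (;;) {
            /* First, try to satisfy the request from the current span. */
            if ((size_t)(span_end - span_cur) >= size) {
                void *buf = span_cur;
                span_cur += size;
                return (buf);
            }

            /*
             * The span is exhausted: count the failed attempt, grow
             * the heap, and retry.  The caller still gets memory, so
             * this "fail" only means the heap had to be extended.
             */
            alloc_fail++;

            size_t grow = (size > GROW_QUANTUM) ? size : GROW_QUANTUM;
            void *base = sbrk((intptr_t)grow);
            if (base == (void *)-1)
                return (NULL);          /* genuinely out of memory */
            span_cur = base;
            span_end = span_cur + grow;
        }
    }

    int
    main(void)
    {
        int i;

        for (i = 0; i < 1000; i++)
            (void) sbrk_top_alloc(1024);

        /* ~1 MB requested in 64K spans => a handful of "fails". */
        (void) printf("alloc fail = %lu, yet every request succeeded\n",
            alloc_fail);
        return (0);
    }

The real implementation is more involved, of course, which is why the source is worth reading.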
Have a look at:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libumem/common/vmem_sbrk.c

David

----- Original Message -----
From: "Pavesi, Valdemar (NSN - US/Boca Raton)" <valdemar.pav...@nsn.com>
Date: Friday, January 16, 2009 3:24 pm

> Hello,
>
> I have an example of a memory leak.
>
> What does the alloc fail = 335 mean?
>
> # mdb -p 1408
> Loading modules: [ ld.so.1 libumem.so.1 libc.so.1 libuutil.so.1 ]
> > ::findleaks -dv
> findleaks: maximum buffers     => 14920
> findleaks: actual buffers      => 14497
> findleaks:
> findleaks: potential pointers  => 316574898
> findleaks: dismissals          => 309520985 (97.7%)
> findleaks: misses              => 6929221 ( 2.1%)
> findleaks: dups                => 110601 ( 0.0%)
> findleaks: follows             => 14091 ( 0.0%)
> findleaks:
> findleaks: elapsed wall time   => 54 seconds
> findleaks:
> BYTES  LEAKED VMEM_SEG         CALLER
> 4096        4 fffffd7ffc539000 MMAP
> 16384       1 fffffd7ffe83d000 MMAP
> 4096        1 fffffd7ffe812000 MMAP
> 8192        1 fffffd7ffd7bc000 MMAP
> 24016     397 124a2a0          libstdc++.so.6.0.8`_Znwm+0x1e
> ------------------------------------------------------------------------
>    Total   401 oversized leaks, 9567120 bytes
>
> CACHE            LEAKED BUFCTL           CALLER
> 00000000004cf468      1 000000000050ed20 libstdc++.so.6.0.8`_Znwm+0x1e
> 00000000004cf468      1 000000000050c000 libstdc++.so.6.0.8`_Znwm+0x1e
> 00000000004cf468      1 000000000050ea80 libstdc++.so.6.0.8`_Znwm+0x1e
> 00000000004cf468      1 000000000050c0e0 libstdc++.so.6.0.8`_Znwm+0x1e
> 00000000004cf468      1 000000000050ee00 libstdc++.so.6.0.8`_Znwm+0x1e
> ----------------------------------------------------------------------
>    Total        5 buffers, 80 bytes
>
> mmap(2) leak: [fffffd7ffc539000, fffffd7ffc53a000), 4096 bytes
> mmap(2) leak: [fffffd7ffe83d000, fffffd7ffe841000), 16384 bytes
> mmap(2) leak: [fffffd7ffe812000, fffffd7ffe813000), 4096 bytes
> mmap(2) leak: [fffffd7ffd7bc000, fffffd7ffd7be000), 8192 bytes
> umem_oversize leak: 397 vmem_segs, 24016 bytes each, 9534352 bytes total
> ADDR    TYPE START   END     SIZE  THREAD TIMESTAMP
> 124a2a0 ALLC 1252000 1257dd0 24016      1 56bd6f2a6fe1
>         libumem.so.1`vmem_hash_insert+0x90
>         libumem.so.1`vmem_seg_alloc+0x1c4
>         libumem.so.1`vmem_xalloc+0x50b
>         libumem.so.1`vmem_alloc+0x15a
>         libumem.so.1`umem_alloc+0x60
>         libumem.so.1`malloc+0x2e
>         libstdc++.so.6.0.8`_Znwm+0x1e
>         libstdc++.so.6.0.8`_Znam+9
>
> > ::umastat
> cache                        buf    buf    buf    memory     alloc alloc
> name                        size in use  total    in use   succeed  fail
> ------------------------- ------ ------ ------ --------- --------- -----
> umem_magazine_1               16      5    101      4096         6     0
> umem_magazine_3               32    356    378     24576       356     0
> umem_magazine_7               64     20     84      8192        92     0
> umem_magazine_15             128     11     21      4096        11     0
> umem_magazine_31             256      0      0         0         0     0
> umem_magazine_47             384      0      0         0         0     0
> umem_magazine_63             512      0      0         0         0     0
> umem_magazine_95             768      0      0         0         0     0
> umem_magazine_143           1152      0      0         0         0     0
> umem_slab_cache               56    638    650     53248       638     0
> umem_bufctl_cache             24      0      0         0         0     0
> umem_bufctl_audit_cache      192  15328  15336   3489792     15328     0
> umem_alloc_8                   8      0      0         0         0     0
> umem_alloc_16                 16     79    170      8192   2098631     0
> umem_alloc_32                 32    267    320     20480       306     0
> umem_alloc_48                 48   4653   4692    376832      6028     0
> umem_alloc_64                 64   5554   5568    712704     12642     0
> umem_alloc_80                 80   2492   2520    286720      5185     0
> umem_alloc_96                 96    492    512     65536       654     0
> umem_alloc_112               112     95    112     16384       103     0
> umem_alloc_128               128     38     42      8192        42     0
> umem_alloc_160               160     12     21      4096        86     0
> umem_alloc_192               192      2     16      4096         2     0
> umem_alloc_224               224      5     16      4096       848     0
> umem_alloc_256               256      1     12      4096         1     0
> umem_alloc_320               320      7   1010    413696    560719     0
> umem_alloc_384               384     34     36     16384        41     0
> umem_alloc_448               448      5      8      4096        10     0
> umem_alloc_512               512      1      7      4096         2     0
> umem_alloc_640               640     11     22     16384        16     0
> umem_alloc_768               768      2      9      8192       424     0
> umem_alloc_896               896      1      4      4096         2     0
> umem_alloc_1152             1152     11     20     24576       127     0
> umem_alloc_1344             1344      4     40     61440     17179     0
> umem_alloc_1600             1600      3      7     12288         5     0
> umem_alloc_2048             2048      2      9     20480         6     0
> umem_alloc_2688             2688      5      7     20480        10     0
> umem_alloc_4096             4096      6      7     57344       335     0
> umem_alloc_8192             8192    118    119   1462272       565     0
> umem_alloc_12288           12288     20     21    344064       485     0
> umem_alloc_16384           16384      1      1     20480         1     0
> ------------------------- ------ ------ ------ --------- --------- -----
> Total [umem_internal]                          3584000     16431     0
> Total [umem_default]                           4001792   2704455     0
> ------------------------- ------ ------ ------ --------- --------- -----
>
> vmem                         memory     memory    memory     alloc alloc
> name                         in use      total    import   succeed  fail
> ------------------------- --------- ---------- --------- --------- -----
> sbrk_top                   25309184   25399296         0      3192   335
> sbrk_heap                  25309184   25309184  25309184      3192     0
> vmem_internal               2965504    2965504   2965504       366     0
> vmem_seg                    2875392    2875392   2875392       351     0
> vmem_hash                     51200      53248     53248         7     0
> vmem_vmem                     46200      55344     36864        15     0
> umem_internal               3788864    3792896   3792896       900     0
> umem_cache                    42968      57344     57344        41     0
> umem_hash                    142336     147456    147456        36     0
> umem_log                     131776     135168    135168         3     0
> umem_firewall_va                  0          0         0         0     0
> umem_firewall                     0          0         0         0     0
> umem_oversize              14130869   14413824  14413824      1286     0
> umem_memalign                     0          0         0         0     0
> umem_default                4001792    4001792   4001792       638     0
> ------------------------- --------- ---------- --------- --------- -----
>
> -----Original Message-----
> From: dtrace-discuss-boun...@opensolaris.org
> [mailto:dtrace-discuss-boun...@opensolaris.org] On Behalf Of ext David Lutz
> Sent: Friday, January 16, 2009 6:07 PM
> To: venkat
> Cc: dtrace-discuss@opensolaris.org
> Subject: Re: [dtrace-discuss] C++ Applications with Dtrace
>
> Hi Venkat,
>
> I believe "alloc succeed" is a count of memory requests that were
> successful. That memory may have been freed later, so it doesn't
> necessarily point to the reason for a growing memory footprint. The
> column to be concerned with is "memory in use".
>
> David
>
> ----- Original Message -----
> From: venkat <venki.dammalap...@gmail.com>
> Date: Friday, January 16, 2009 2:44 pm
>
> > Hi David,
> >
> > What is the "alloc succeed" column from the ::umastat dcmd? That
> > value keeps increasing. Is that memory the process is holding on
> > to? My process's memory usage keeps increasing in the same way.
> >
> > Can you clarify, please?
> >
> > Thanks,
> > Venkat
> > --
> > This message posted from opensolaris.org

_______________________________________________
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org
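David's distinction above between "alloc succeed" and "memory in use" is easy to demonstrate. The sketch below (my own illustration, not from the thread; "demo" is a made-up binary name) makes a million requests and frees every one: run under libumem, its "alloc succeed" counts climb into the millions while "memory in use" stays flat, which is exactly the pattern that does not indicate a leak.

    /*
     * Run under libumem and inspect with ::umastat, e.g.:
     *
     *   LD_PRELOAD=libumem.so.1 UMEM_DEBUG=audit ./demo &
     *   mdb -p <pid>     then:   > ::umastat
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        int i;

        /* A million successful requests, none of them leaked. */
        for (i = 0; i < 1000000; i++) {
            void *p = malloc(64);  /* served by a small umem_alloc_* cache */
            if (p == NULL)
                return (1);
            free(p);
        }

        (void) printf("pid %ld: attach mdb -p and run ::umastat\n",
            (long)getpid());

        /* Park here so mdb can attach while the process is alive. */
        (void) pause();
        return (0);
    }

If the process really were leaking, as in Valdemar's output above, the growth would show up in "memory in use" (or, for the oversized C++ allocations, in the umem_oversize arena), not merely in "alloc succeed".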