Hello,
we have a massively multithreaded daemon (more than 100 LWPs) running on Solaris
9, E25000, 64-bit. We use umem via LD_PRELOAD_64, i.e. there are no explicit
calls to umem. It performs fine during the day; however, at the start of real
work, when data start to come in, a lot of threads (10-15
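The preload setup mentioned above (umem with no explicit calls, loaded via LD_PRELOAD_64) can be sketched as follows; the daemon binary name is a placeholder, and the library path is the stock 64-bit libumem location on Solaris:

```shell
# Interpose libumem on a 64-bit process with no source changes.
# /usr/lib/64/libumem.so.1 is the standard Solaris location.
LD_PRELOAD_64=/usr/lib/64/libumem.so.1
export LD_PRELOAD_64
./mydaemon &    # hypothetical daemon binary
```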
Thank you for your feedback, I'll check this option out. However, I guess a
possible side effect is higher memory consumption? What is the default setting
for sbrk_pagesize?
Currently the process is 4-6 GB and there is 18 GB of free RAM, so I guess we
are safe in any case, but better to be prepared (it's production,
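To see whether a tuning change actually raises the footprint, the standard Solaris observability tools are enough; a sketch (the PID is a placeholder):

```shell
# Heap/RSS breakdown of the running daemon (12345 is a placeholder PID)
pmap -x 12345 | tail -5

# Ongoing SIZE/RSS for the process
prstat -p 12345
```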
Both mtmalloc and umem have more or less equal debugging abilities.
Both mtmalloc and umem are MT-optimized allocators.
The difference is that mtmalloc consumes much more memory.
This message posted from opensolaris.org
perf-discuss mailing list
Thank you, Bryan, for the information. I had missed leak detection;
unfortunately it is just mentioned in the man page but not described in any way.
However, we use umem for its MT-optimized allocation abilities. We used mtmalloc
before with Solaris 8 (and still use it on those sites that are still on Sol 8),
but switc
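The leak detection referred to above is driven by umem's debugging environment variables plus mdb's ::findleaks dcmd; a minimal sketch, assuming the same preloaded daemon (binary name and PID are placeholders):

```shell
# Enable umem debugging and transaction logging before starting the daemon;
# ::findleaks needs the transaction log to attribute leaked buffers.
UMEM_DEBUG=default
UMEM_LOGGING=transaction
export UMEM_DEBUG UMEM_LOGGING
LD_PRELOAD_64=/usr/lib/64/libumem.so.1 ./mydaemon &

# Later, attach mdb to the live process and scan for leaks
# (12345 is a placeholder PID).
echo ::findleaks | mdb -p 12345
```

Note that the debugging settings themselves cost memory and CPU, so this is something to enable for a diagnostic run rather than permanently in production.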
Hello,
there is a daemon which has about 30 threads using the same RW lock. Most of
the time the RW lock is read-locked by several threads. It works as expected in
all setups except one. A customer with 16 UltraSPARC IV CPUs had terrible
performance problems with this daemon. Investigation revealed that half of
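For an investigation like this, Solaris ships tooling that quantifies lock contention directly on the running process; a sketch (the PID is a placeholder):

```shell
# Sample all lock events (contended mutexes and RW locks) in the daemon
# for 30 seconds; plockstat reports hold/block times per lock and call site.
plockstat -A -e 30 -p 12345

# Per-thread microstate accounting: a high LCK% column means threads are
# spending their time waiting on user-level locks.
prstat -mL -p 12345
```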