On Mon, May 11, 2009 at 11:10:37AM -0500, Bob Friesenhahn wrote:
> It seems that the performance issue stems from libumem using memory  
> mapped allocations rather than sbrk allocations.  I have not seen a  
> performance impact from using libumem in any other part of the software.  
> The performance hit in libjpeg is pretty severe.
>>
>> -lmalloc:
>> real    0m16.134s
>> user    0m12.905s
>> sys     0m2.253s
>> 
>> -lmtmalloc:
>> real    0m16.275s
>> user    0m12.969s
>> sys     0m2.267s
>> 
>> -lumem:
>> real    0m21.003s
>> user    0m17.744s
>> sys     0m2.251s
>> 
>> -lmapmalloc:
>> real    0m21.023s
>> user    0m17.830s
>> sys     0m3.161s

I'm not entirely convinced that this is simply a difference between
memory-mapped allocations and sbrk allocations.  If you compare the
malloc and umem numbers, the overall increase in real time comes from
the extra 5 seconds of user time under umem.

However, mapmalloc increases both the system and user time when compared
to malloc. 

There have been some past discussions on this list about identifying
problems with memory allocations, and applications that allocate memory
inefficiently.  If your application is performing a lot of small memory
allocations, it's possible that you're seeing poor performance for that
reason.

        http://mail.opensolaris.org/pipermail/perf-discuss/2008-May/003393.html

I would take a look at how your application is allocating memory, and
where it's spending its time in these two different libraries.  That
might shed some light on what the problem is.

-j
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
