Bob Friesenhahn wrote:
SPOT says that most time is spent executing libjpeg code (primarily
ycc_rgb_convert) and that there is quite a lot of application stall with
"LD/ST Unit Full" at a wopping 49.7%. When using -lumem, the program
seems to spend 45% of the time waiting. This is definitely
Regarding the questions Eric raised, I think we can answer "What/who is it
waiting on" by adding a new view like:
Process XYZ:          Total: 80 msec
Type:Name             Count   Average   Max
Mutex:ff02532845f
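A view in roughly that shape can already be prototyped with plockstat(1M), which reports per-lock contention counts and wait times for a running process (the PID below is a placeholder, not from the thread):

```shell
# Illustrative sketch: watch mutex contention in process 1234 for 10 seconds.
# plockstat's -C output has columns similar to the proposed view:
# Count, nsec (average wait), Lock, Caller.
plockstat -C -e 10 -p 1234
```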
On Tue, 12 May 2009, Roland Mainz wrote:
Can you check whether the memory allocator in libast performs better in
this case (e.g. compile with $ cc -I/usr/include/ast/ -last ... # (note:
libast uses a |_ast_|-prefix for all symbols and does (currently) not
act as |malloc()| interposer)).
I assu
On Mon, May 11, 2009 at 08:35:40PM -0500, Bob Friesenhahn wrote:
>>> Yes. I don't know what libjpeg itself does, but GraphicsMagick should
>>> be performing a similar number of allocations (maybe 1000 small
>>> allocations) regardless of the size of the JPEG file.
>>
>> There are some known issues
On Mon, 11 May 2009, johan...@sun.com wrote:
I'm not disagreeing that the user time between umem and mapmalloc is
very similar. However, page-fault time should be attributed as system
time. Chad Mynhier recently putback some improvements to ptime(1), that
show information about the microstate a
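For reference, the microstate breakdown those ptime(1) changes expose can be read like this (an illustrative invocation; the file names are placeholders):

```shell
# Hedged sketch: ptime -m prints per-microstate times on Solaris, including
# tflt/dflt (text/data page-fault wait), so page-fault time shows up
# separately from usr and sys rather than being folded into user time.
ptime -m gm convert input.jpg output.png
```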
On Mon, May 11, 2009 at 06:42:55PM -0500, Bob Friesenhahn wrote:
> On Mon, 11 May 2009, johan...@sun.com wrote:
>>
>> I'm not entirely convinced that this is simply the difference between
>> memory mapped allocations versus sbrk allocations. If you compare the
>> numbers between malloc and umem, n
On Tue, May 12, 2009 at 01:33:15AM +0200, Roland Mainz wrote:
> Can you check whether the memory allocator in libast performs better in
> this case (e.g. compile with $ cc -I/usr/include/ast/ -last ... # (note:
> libast uses a |_ast_|-prefix for all symbols and does (currently) not
> act as |malloc
On Mon, 11 May 2009, johan...@sun.com wrote:
I'm not entirely convinced that this is simply the difference between
memory mapped allocations versus sbrk allocations. If you compare the
numbers between malloc and umem, notice that the overall increase in
real time is due to the extra 5 seconds s
Roland Mainz wrote:
> johan...@sun.com wrote:
> > On Mon, May 11, 2009 at 11:10:37AM -0500, Bob Friesenhahn wrote:
[snip]
> > There have been some past discussions on this list about identifying
> > problems with memory allocations, and applications that allocate memory
> > inefficiently. If your
johan...@sun.com wrote:
> On Mon, May 11, 2009 at 11:10:37AM -0500, Bob Friesenhahn wrote:
> > It seems that the performance issue stems from libumem using memory
> > mapped allocations rather than sbrk allocations. I have not seen a
> > performance impact from using libumem in any other part of t
On Mon, May 11, 2009 at 11:10:37AM -0500, Bob Friesenhahn wrote:
> It seems that the performance issue stems from libumem using memory
> mapped allocations rather than sbrk allocations. I have not seen a
> performance impact from using libumem in any other part of the software.
> The perform
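One way to test the mmap-versus-sbrk hypothesis directly is libumem's backend tunable, assuming the UMEM_OPTIONS environment variable documented in umem_alloc(3MALLOC); the GraphicsMagick command and file names below are placeholders:

```shell
# Compare wall-clock time with the two libumem backends on the same workload.
LD_PRELOAD=libumem.so.1 UMEM_OPTIONS=backend=sbrk time gm convert big.jpg out.ppm
LD_PRELOAD=libumem.so.1 UMEM_OPTIONS=backend=mmap time gm convert big.jpg out.ppm
```

If the slowdown really comes from memory-mapped allocations, the backend=sbrk run should close most of the gap against plain malloc.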
On Mon, 11 May 2009, Adam Zhang wrote:
Hi Bob,
The standard libc malloc is thread-safe. If the application doesn't share
memory locks across threads, I don't think libumem will help its
performance.
It seems that the performance issue stems from libumem using memory
mapped al