Mika Borner wrote:
>
> Mika Borner wrote:
> >Darren Reed wrote:
> >>
> >>Is any of this driven out of inetd?
> >>
> >>
> >No, it's a separate process that spawns LWPs
> >
>
> I have now trussed the unresponsive process and can see:
>
> /3:  lwp_park(0xFEFD7DA8, 0)  Err#62 ETIME
>
I think the results so far point out the difficulty of trying to
investigate this issue with dtrace. If the system has already
reached a steady state of kernel memory utilization, tracking new
kmem_alloc() calls isn't likely to reveal anything interesting,
since those allocations are probably being satisfied from existing
kmem caches. [...] Stack traces will probably point to a specific
subsystem that is driving the memory utilization.
HTH,
David Lutz
- Original Message -
From: "Haiou Fu (Kevin)"
Date: Wednesday, December 31, 2008 2:07 pm
> We have a V490 server that is running short of memory (it has 32GB of
> memory [...]
If your memory reservations exceed your backing store, the additional
reservations are being made against physical RAM. That means you
have physical memory that is not available for active use, either for
memory allocations or for things like file system buffering. I hesitate
to say that it makes [...] If you aren't using DISM, you should check the
amount of reserved memory
reported by "swap -s" then track down where the reservations are going.
You can use "pmap -S" to report reservations.
HTH,
David Lutz
- Original Message -
From: Awais Vaseer <[EMAIL PROTECTED]>
If your application is single threaded, you could try using the
bsdmalloc library. This is a fast malloc, but it is not multi-thread
safe and will also tend to use more memory than the default
malloc. For a comparison of different malloc libraries, look
at the NOTES section at the end of umem_alloc(3MALLOC).
[...] cache line boundary, and
that the structures handed to each thread are also cache line aligned. It
is often worth adding a little padding to each structure if necessary to keep
the alignment you need. Otherwise, you can introduce a cache-line ping-pong
game between cores.
David Lutz