I've been musing recently on capacity planning for systems which see
very peaky workloads. For example, in the financial markets, a very busy
day can be much, much busier than an average day; this means you need to
plan for relatively low utilisation on the average day, in order to give
headroom for the peaks.
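To put rough numbers on that, here is a back-of-envelope sketch in C; the 10x peak-to-average ratio and the 80% peak-utilisation ceiling are made-up figures for illustration, not measurements from any real system.

#include <stdio.h>

/*
 * Back-of-envelope capacity planning for a peaky workload.
 * Both inputs below are illustrative assumptions, not real data.
 */
int
main(void)
{
	double peak_to_avg = 10.0;   /* busiest day is ~10x an average day */
	double peak_ceiling = 0.80;  /* keep the peak day under 80% busy */

	/* Utilisation you can afford on an ordinary day. */
	double avg_util = peak_ceiling / peak_to_avg;

	printf("average-day utilisation target: %.0f%%\n", avg_util * 100.0);
	return 0;
}

With those assumed numbers an ordinary day runs at only about 8% utilisation, which is exactly the kind of figure that makes this planning exercise uncomfortable.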
David McDaniel wrote:
Not sure if this is really the right board for this, but here goes. In a
performance-critical, highly available application, when a misbehaving process
core dumps, the creation of the core file (gigabytes in size) puts a lot of
pressure on the ability to restart and recover.
Am I the only one having these problems? Could someone please try
www.sun.com and see if the images are being drawn normally, on x86 and
Xorg? If so, what kind of graphics card do you have?
I'm seeing the same sort of behaviour on an IBM ThinkPad R52 - graphics
card is ATI Radeon Mobility X
> You can get the latency information using the lgrpinfo -l
> command. This will extract the latency table from the kernel. Note that
> these latencies are just an approximation of the real node-to-memory
> latencies, but they do give an idea of relative latencies. On SPARC
> systems these latencies
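If you want the same table programmatically rather than via lgrpinfo -l, liblgrp exposes it. The sketch below assumes the documented lgrp_init(), lgrp_root(), lgrp_children(), lgrp_latency() and lgrp_fini() interfaces and is compiled with -llgrp; per the caveat above, treat the numbers as relative, not as nanoseconds.

#include <sys/lgrp_user.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	lgrp_cookie_t cookie;
	lgrp_id_t root, *nodes;
	int n, i, j;

	cookie = lgrp_init(LGRP_VIEW_OS);
	if (cookie == LGRP_COOKIE_NONE) {
		perror("lgrp_init");
		return (1);
	}
	root = lgrp_root(cookie);

	/* Passing NULL/0 just asks how many child lgroups the root has. */
	n = lgrp_children(cookie, root, NULL, 0);
	if (n <= 0) {
		/* Single-lgroup (UMA) box: only the root exists. */
		printf("one lgroup, local latency %d\n",
		    lgrp_latency(root, root));
		lgrp_fini(cookie);
		return (0);
	}

	nodes = malloc(n * sizeof (lgrp_id_t));
	(void) lgrp_children(cookie, root, nodes, n);

	/* Pairwise table: diagonal = local, off-diagonal = remote. */
	for (i = 0; i < n; i++) {
		for (j = 0; j < n; j++)
			printf("%5d", lgrp_latency(nodes[i], nodes[j]));
		printf("\n");
	}

	free(nodes);
	lgrp_fini(cookie);
	return (0);
}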
> More comments for pmadvise:
The product I work on is already shipping something which does the
equivalent of a small subset of madvise, so it would be generally useful to
us.
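For reference, the madvise(3C) call being compared against looks roughly like this; the file path and the MADV_SEQUENTIAL advice here are just placeholder choices for illustration.

#include <sys/mman.h>
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Sketch: map a file and tell the VM system we'll read it sequentially,
 * so it can read ahead and free pages behind us.  Path and advice are
 * illustrative placeholders.
 */
int
main(void)
{
	const char *path = "/var/tmp/bigfile";   /* placeholder path */
	int fd = open(path, O_RDONLY);
	off_t len;
	void *p;

	if (fd < 0) {
		perror("open");
		return 1;
	}

	len = lseek(fd, 0, SEEK_END);
	p = mmap(NULL, (size_t)len, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Advise sequential access; MADV_DONTNEED etc. work the same way. */
	if (madvise(p, (size_t)len, MADV_SEQUENTIAL) != 0)
		perror("madvise");

	/* ... read through the mapping ... */

	munmap(p, (size_t)len);
	close(fd);
	return 0;
}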
I have to echo the other positive comments about the lgrp tools. They're
shedding light on a whole area which has previously been opaque.
> The question is how to lower the CPU utilization of an application.
> - Are there any general rules?
Wow - this is a very, very general question. There are, however, a couple of
general rules which usually stand you in good stead when looking at
optimisation:
* There are only two ways to make software go faster: do less work, or do
  the same work more efficiently. (A trivial sketch of the first follows.)
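As an illustration of the "do less work" half of that rule (the example is mine, not from the original thread): hoisting an invariant computation out of a hot loop removes work outright, before you ever worry about doing the remaining work faster.

#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* Before: recomputes the same sqrt() on every iteration of a hot loop. */
static void
scale_slow(double *v, size_t n, double k)
{
	size_t i;

	for (i = 0; i < n; i++)
		v[i] *= sqrt(k);        /* same value every time round */
}

/* After: the invariant is hoisted, so the loop simply does less work. */
static void
scale_fast(double *v, size_t n, double k)
{
	double s = sqrt(k);             /* computed once */
	size_t i;

	for (i = 0; i < n; i++)
		v[i] *= s;
}

int
main(void)
{
	double v[4] = { 1.0, 2.0, 3.0, 4.0 };

	scale_slow(v, 4, 2.0);
	scale_fast(v, 4, 2.0);
	printf("%f\n", v[0]);
	return 0;
}

The second rule (do the work more efficiently) is where profiling comes in: measure first, then attack whatever actually dominates.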