On Mon, Nov 28, 2011 at 12:26 AM, Pandu Poluan <pa...@poluan.info> wrote:
> On Nov 28, 2011 11:32 AM, "Michael Mol" <mike...@gmail.com> wrote:

[snip]

> Unfortunately, striving for 2*N will inadvertently result in short
> bursts of 2*N, and this potentially induces a stall, which will be
> very costly. 1.8*N gives a 10% margin for burst activities, while
> 1.6*N gives a 20% margin.

Actually, I suspect that 2*N is normally beneficial as a way to cover
for processes blocked on I/O.

I don't believe processes blocked on I/O are counted in the system's
instantaneous load, so they wouldn't be noticed when the kernel polls
to build a load average.

That tells me that, for a load-aware system, you want N for your
load-aware settings, and 2*N (or thereabouts) for the settings that
aren't load-aware. Make's -j parameter is an example of the latter:
it only acts as an upper limit while the load average is low and
-l's algorithm keeps saying "go on, go on!"
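
To make that concrete, here's a rough sketch of how I'd derive the
two numbers; it assumes this box has 8 logical CPUs (which is what
-j16 = 2*N would imply), and the 2*N / N split is just my reading of
the rule of thumb, not anything the make or portage docs prescribe:

  # run at a shell prompt, then copy the output into /etc/portage/make.conf
  N=$(nproc)    # logical CPUs
  echo "MAKEOPTS=\"-j$((2 * N)) -l${N}\""
  echo "EMERGE_DEFAULT_OPTS=\"--jobs --load-average=${N}\""

On an 8-thread box that prints -j16 -l8, which is the combination I'm
queuing up for the next run below.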

I'm currently timing

MAKEOPTS="-j16 -l13"
EMERGE_DEFAULT_OPTS="--jobs --load-average=13"

with 493 packages (base plus X plus XFCE and chromium, and, of course,
USE flags), but I'll start another timed run with

MAKEOPTS="-j16 -l8"
EMERGE_DEFAULT_OPTS="--jobs --load-average=8"

once that's finished. Last night, I tried with -j16 -l10, and that
completed in 209 minutes, but that was still with the
PORTAGE_DEFAULT_OPTS typo, so that datapoint is mostly useless. This
one has already taken about 240 minutes. At least it's finished
building Chromium now; I hope it doesn't still need to build gcc.
It's at 488/493.

(insert) Just finished:

real    208m23.880s
user    604m27.065s
sys     152m22.848s

Apparently, I misremembered when I started it.
(/insert)

FWIW, I strongly suspect that N should be your number of *logical*
cores, not your number of physical cores. I believe most of the
overhead
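
If you want to compare the two counts, here's a quick sketch; it
assumes a Linux box with coreutils and the usual /proc/cpuinfo
layout, nothing Gentoo-specific:

  # logical CPUs (includes hyperthread siblings)
  nproc
  # physical cores: count unique (physical id, core id) pairs
  awk -F: '/^physical id/ {p=$2} /^core id/ {print p "-" $2}' \
      /proc/cpuinfo | sort -u | wc -l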

-- 
:wq
