On Wed, Sep 12, 2012 at 12:33 PM, Hans de Graaff <gra...@gentoo.org> wrote:

> On Wed, 2012-09-12 at 08:58 -0400, Ian Stakenvicius wrote:
>
> > So essentially what you're saying here is that it might be worthwhile
> > to look into parallelism as a whole and possibly come up with a
> > solution that combines 'emerge --jobs' and build-system parallelism
> > together to maximum benefit?
>
> Forget about jobs and load average, and just keep starting jobs all
> around until there is only 20% (or whatever tuneable amount) free memory
> left. As far as I can tell this is always the real bottleneck in the
> end. Once you hit swap overall throughput has to go down quite a bit.
>
I've been thinking about this, but that only works until you get to the
huge link step of, e.g., chromium, firefox, or libreoffice.
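The "spawn until only N% memory is free" idea can be sketched as a simple gate in front of the job scheduler. This is a minimal illustration, not anything Portage actually does; it assumes a Linux /proc/meminfo with a MemAvailable field (present on kernels 3.14 and later), and the `may_spawn_job` name and 20% threshold are made up for the example:

```python
def mem_free_fraction(meminfo_path="/proc/meminfo"):
    # Parse the kB-valued fields of /proc/meminfo into a dict.
    fields = {}
    with open(meminfo_path) as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.strip().split()[0])
    # Fraction of RAM the kernel estimates is available for new work.
    return fields["MemAvailable"] / fields["MemTotal"]

def may_spawn_job(threshold=0.20):
    # Gate new build jobs on at least `threshold` of RAM being free,
    # instead of (or in addition to) a --jobs / --load-average cap.
    return mem_free_fraction() >= threshold
```

The catch described below is exactly what such a gate cannot see: a single already-running ld can blow past the threshold on its own after the check has passed.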

I've seen programs with memory leaks in the past, but I've never seen a
program legitimately consume as much memory as ld does during those builds.

To cover something like that, you would need to be able to freeze and swap
out an entire process (such as ld) so that something else could complete
quickly. But there's no good way I can think of to sanely prioritize
between the one big process and the few dozen smaller ones which might be
allowed to spawn and complete first.
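The "freeze and swap out" part, at least, has a standard mechanism: SIGSTOP cannot be caught or ignored, and a stopped process consumes no CPU, so its pages become ordinary candidates for swap-out under memory pressure. A minimal sketch (the `freeze`/`thaw` names are made up; the hard prioritization problem above is untouched):

```python
import os
import signal

def freeze(pid):
    # SIGSTOP unconditionally stops the process; the kernel can then
    # reclaim its pages to swap as other jobs demand memory.
    os.kill(pid, signal.SIGSTOP)

def thaw(pid):
    # SIGCONT resumes it once memory pressure has eased.
    os.kill(pid, signal.SIGCONT)
```

Deciding *when* to freeze the big link and *which* of the smaller jobs to let through first is the part with no obvious sane policy.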

-- 
:wq
