On Thu, Feb 16, 2023 at 5:32 AM Andreas Fink <finkandr...@web.de> wrote:
>
> On Thu, 16 Feb 2023 09:53:30 +0000
> Peter Humphrey <pe...@prh.myzen.co.uk> wrote:
>
> > Yes, I was aware of that, but why didn't --load-average=32 take precedence?
> This only means that emerge would not schedule an additional package
> job (where a package job means something like `emerge gcc`) when the
> load average is > 32; however, once a job has been scheduled it keeps
> running, independently of the current load.
> With it in MAKEOPTS it is handled by make itself, which schedules the
> individual build jobs and stops launching additional ones when the
> load is too high.
>
> Extreme case:
> emerge chromium firefox qtwebengine
>   --> your load when you do this is pretty much close to 0, i.e. all 3
>   packages are being merged simultaneously and each will be built with
>   -j16.
> I.e. for a long time you will have about 3*16=48 single build jobs
> running in parallel, so you should see the load climbing towards 48
> if you have no -l limit in your MAKEOPTS.
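
To make the distinction concrete, here's a make.conf sketch with
illustrative values (adjust the numbers to your box):

    # Emerge-level throttle: run up to 3 package jobs, but don't *start*
    # another one while the 1-minute load average is above 32.
    EMERGE_DEFAULT_OPTS="--jobs=3 --load-average=32"

    # Make-level throttle: up to 16 compile jobs per build, and -l tells
    # make not to start new ones while the load average is above 32.
    MAKEOPTS="-j16 -l32"

Note that both limits only gate the start of new jobs; neither will
stop jobs that are already running.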

TL;DR - the load-average option results in underdamping, because the
load average measurement lags behind the actual load.

Keep in mind that load averages are averages and have a time lag, and
compilers that are swapping like crazy can run for a fairly long time.
So you will probably see fairly severe oscillation in the load if
swapping is happening.  If your load is under 32, each of your 16
parallel makes, even running with the limit in MAKEOPTS, will feel
free to launch more jobs - up to 16 apiece, or 256 in total - because
it takes seconds for the 1-minute load average to creep above 32.  At
that point you have WAY more than 32 tasks running, and if they're
swapping then half of the processes on your system are probably going
to start blocking.  So now make (if configured in MAKEOPTS) will hold
off on launching anything, but it could take minutes for those
swapping compiler jobs to complete the amount of work that would
normally take a few seconds.  Then, as those processes eventually
start terminating (assuming you don't get OOM kills or panics), your
load will start dropping, until eventually it gets back below 32, at
which point all those make processes that have just been sitting
around will wake up and fire off another 50 gcc instances, or however
many they get up to before the brakes come back on.
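
You can actually watch the lag happen: during a big merge, sample
/proc/loadavg once a second and compare the averages against the
runnable-task count (the fourth field), which reacts immediately:

    # fields: 1-, 5-, 15-minute averages, runnable/total tasks, last PID
    watch -n1 cat /proc/loadavg

The runnable count shoots past 32 long before the 1-minute average
follows it.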

The load average setting is definitely useful and I would set it, but
when the issue is swapping it doesn't go far enough.  Make has no idea
how much memory a gcc process will require.  Since memory is the
resource that's actually causing the problems, it is hard to
efficiently max out your cores without accounting for memory use.  The
best I've been able to do is set things conservatively enough that
they never get out of control, which underutilizes the CPU in the
process.  Often only parts of a build even have issues - something big
like chromium might have 10,000 tasks that would run fine with -j16 or
whatever, but then there is this one part where the jobs all want a
ton of RAM, and you need to run just that one part at a lower setting.
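
One way to cope with that last case without slowing everything else
down is a per-package override through Portage's package.env - a
sketch with illustrative values (the env file name is arbitrary):

    # /etc/portage/env/ram-hungry.conf
    MAKEOPTS="-j4 -l8"

    # /etc/portage/package.env
    www-client/chromium  ram-hungry.conf
    dev-qt/qtwebengine   ram-hungry.conf

It's still a blunt instrument - it slows the whole package rather than
just the one RAM-hungry part - but it lets the rest of the system keep
building at full speed.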

-- 
Rich
