On Tue, 19 Sep 2023 17:14:42 +0800
William Kenworthy <bi...@iinet.net.au> wrote:

> That is where you set per package compiler parameters by overriding
> make.conf settings.
>
> BillK
>
>
I would argue that per-package compiler parameters are not what is
needed: in the chromium example, 99% of the compile can run with -j16
on my machine, but for a very short period I would need to drop to
-j1, because otherwise I run out of memory.
In short: I want to run with as many jobs as I have cores for as long
as I do not run out of memory, and when I do run out of memory I want
to run with as few jobs as possible until the pressure on the memory
is gone. Then I want to continue with as many jobs as possible.

And this is not something that make / ninja provide. They only have a
global number of jobs, which therefore must be set to the maximum
that your RAM can take during the very short period when memory usage
peaks, but that number is far too low for the other 99% of the
compilation time.
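
To make that concrete, on Gentoo that global number is whatever
MAKEOPTS pins in make.conf for the whole build (the values here are
just from my machine, for illustration):

  # /etc/portage/make.conf
  MAKEOPTS="-j16"    # fine for 99% of a chromium build...
  #MAKEOPTS="-j1"    # ...but the short memory peak would force this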

FWIW, I have a hacky solution that I use privately, but I never
published it anywhere, because it could break some builds, and at the
moment I'm not ready to support it.

Basically it tries to run with as many jobs as there are CPU cores at
all times. It watches memory pressure in the background and kills
build jobs as soon as a high watermark is reached.
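
The watcher half could look roughly like the following; this is only
a sketch of the idea, not my actual code. It assumes Linux PSI
(/proc/pressure/memory, kernel 4.20+), and both the 10% avg10
threshold and the cc1plus process name are placeholders:

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      for (;;) {
          /* First line of /proc/pressure/memory looks like:
           * "some avg10=0.00 avg60=0.00 avg300=0.00 total=0" */
          double avg10 = 0.0;
          FILE *f = fopen("/proc/pressure/memory", "r");
          if (f) {
              fscanf(f, "some avg10=%lf", &avg10);
              fclose(f);
          }
          if (avg10 > 10.0) {
              /* High watermark reached: kill the newest compiler
               * process; the shim below then retries it
               * exclusively. (pkill -n selects the newest match.) */
              system("pkill -KILL -n cc1plus");
          }
          sleep(1);
      }
  }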
At this point make would normally exit, because a build job failed.
However, my hacky solution overrides the exec family of system calls,
and if a job fails, it is retried exclusively, i.e. no other build
job is allowed to run at the same time as the failed one. Only when
the second, exclusive run fails too does the job fail for good.
This way, if the job failed only because it ran out of memory, the
exclusive retry succeeds. If it failed due to a programming error, it
fails the second time as well, and the error is forwarded to make.
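
The override half could be sketched as an LD_PRELOAD shim like the
one below. Again, this is an illustration of the mechanism, not my
actual code: it only interposes execve (a real shim has to cover the
whole exec family, since glibc's execvp etc. do not go through the
PLT), and the lock file path is a placeholder.

  /* build: gcc -shared -fPIC -o shim.so shim.c -ldl
   * use:   LD_PRELOAD=$PWD/shim.so make -j16 */
  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <fcntl.h>
  #include <sys/file.h>
  #include <sys/wait.h>
  #include <unistd.h>

  typedef int (*execve_fn)(const char *, char *const[], char *const[]);

  /* Run the real command in a child and report its exit status. */
  static int run_once(const char *path, char *const argv[],
                      char *const envp[])
  {
      execve_fn real_execve = (execve_fn)dlsym(RTLD_NEXT, "execve");
      pid_t pid = fork();
      if (pid < 0)
          return 127;
      if (pid == 0) {
          real_execve(path, argv, envp);
          _exit(127);                 /* the exec itself failed */
      }
      int status;
      waitpid(pid, &status, 0);
      return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
  }

  int execve(const char *path, char *const argv[], char *const envp[])
  {
      int lock = open("/tmp/build.lock",
                      O_CREAT | O_RDWR | O_CLOEXEC, 0666);

      /* Ordinary run: hold the lock shared, so an exclusive retry
       * elsewhere blocks us and vice versa. */
      flock(lock, LOCK_SH);
      int rc = run_once(path, argv, envp);
      flock(lock, LOCK_UN);

      if (rc != 0) {
          /* The job failed, possibly killed by the watcher: retry
           * it while holding the lock exclusively, so no other job
           * runs at the same time. */
          flock(lock, LOCK_EX);
          rc = run_once(path, argv, envp);
          flock(lock, LOCK_UN);
      }
      _exit(rc);                      /* final status goes to make */
  }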

