On Mon, Sep 18, 2023 at 12:13 PM Alan McKinnon <alan.mckin...@gmail.com> wrote:
>
> Whether you just let emerge do it's thing or try get it to do big packages on 
> their own, everything is still going to use the same number of cpu cycles 
> overall and you will save nothing.

That is true of CPU, but not RAM.  The problem with large parallel
builds is that 95% of packages are fine, but a few will eat up all
the RAM in the system until the OOM killer kicks in, or the system
just goes into a swap storm (which can trigger panics with some
less-than-perfect kernel drivers).

I'm not aware of any simple solution.  I do have some packages set to
build with only a small number of jobs, but that won't prevent other
packages from being built alongside them.  Usually that is enough,
though.  It is just frustrating to watch a package take all day to
build because I can't go above -j2 or so without running out of RAM,
usually at just one step of the build process.
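For anyone curious how that per-package job limit is done: Portage's
package.env mechanism lets you override MAKEOPTS for specific packages.
A sketch (the package names and file names are just examples, and I'm
writing into a scratch directory here rather than /etc/portage so it's
safe to copy-paste):

```shell
# Hypothetical per-package MAKEOPTS override via Portage's package.env.
# On a live system these files would live under /etc/portage; a scratch
# directory is used here purely for illustration.
root=$(mktemp -d)
mkdir -p "$root/env"

# Environment override capping build parallelism for RAM-hungry packages.
cat > "$root/env/heavy-ram.conf" <<'EOF'
MAKEOPTS="-j2"
EOF

# Map specific packages to that override (example package atoms).
cat > "$root/package.env" <<'EOF'
dev-lang/rust heavy-ram.conf
www-client/chromium heavy-ram.conf
EOF

cat "$root/package.env"
```

The catch, as noted above, is that this only limits jobs *within* one
package's build; emerge may still build other packages in parallel next
to it.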

I can't see anybody bothering with this, but in theory packages could
carry a variable hinting at the max RAM consumed per job and the max
number of jobs they will run.  Then the package manager could take the
lesser of -j and the package's max jobs, multiply that by the per-job
RAM requirement, and compare the result to available memory (or to a
configured RAM ceiling).  Basically treat RAM as a resource and let
the package manager reduce -j to manage it where necessary.
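The heuristic I'm imagining is only a few lines.  A sketch, with every
number and variable name made up for illustration (none of this exists
in Portage today):

```shell
# Hypothetical RAM-aware job scaling: clamp -j to the package's own
# ceiling, then shrink it until predicted peak RAM fits the budget.
requested_jobs=16        # the user's global -j
pkg_max_jobs=8           # hint: most jobs this package ever runs at once
pkg_ram_per_job=2048     # hint: peak MiB consumed per job
ram_budget=12288         # MiB the admin allows builds to use

# Effective parallelism: min(-j, package ceiling) ...
jobs=$(( requested_jobs < pkg_max_jobs ? requested_jobs : pkg_max_jobs ))

# ... then reduce further while the predicted footprint exceeds the budget.
while [ $(( jobs * pkg_ram_per_job )) -gt "$ram_budget" ] && [ "$jobs" -gt 1 ]; do
    jobs=$(( jobs - 1 ))
done

echo "$jobs"   # 6: 6*2048 = 12288 MiB fits, 7*2048 = 14336 does not
```

With those example numbers the package's own ceiling (8) already beats
the user's -j16, and the RAM budget then knocks it down to 6.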

Hmm, I guess a workaround would be to set ulimits on the portage user
so that emerge is killed before RAM use gets too far out of hand.  That
won't help the build complete, but it would at least keep it from
taking down the whole system.
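Concretely, something like the following could cap per-process virtual
memory (the 8 GiB figure is arbitrary; one plausible place to apply it
for builds would be /etc/portage/bashrc, which Portage sources in the
build environment, though I haven't tested that):

```shell
# Hypothetical safety net: cap per-process address space so a runaway
# compile job hits the limit and dies before exhausting system RAM.
# ulimit -v takes KiB, so express 8 GiB in KiB first.
limit_kib=$((8 * 1024 * 1024))
ulimit -v "$limit_kib"
echo "virtual memory capped at ${limit_kib} KiB"
```

Note -v limits address space, not resident RAM, so the cap needs some
headroom over the real working set you expect.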

-- 
Rich
