On Sunday, 7 May 2023 11:27:14 BST Peter Humphrey wrote:
> On Saturday, 6 May 2023 19:18:25 BST Jack wrote:

> > I hope I'm not preaching to the choir, and I have NOT reread the
> > various man pages, but the different options you mention (and some you
> > don't) apply to different parts of the process.  Some tell emerge
> > whether or not to start working on another package, but once it starts
> > the process, it has no control over how busy the machine can get.  Then
> > there are those that get passed to make.  I wouldn't think so, but are
> > you possibly confusing the two?  Lastly, I don't see that those that
> > apply to make would have any effect on packages that use ninja instead,
> > so that might also contribute to the issue.
> 
> Yes, I understand all that; it all points to the confusion encouraged by the
> man pages.

As I understand it, and have so far confirmed on my systems, the --jobs 
option explained in the emerge man page places a limit on how many 
non-dependent packages a single emerge invocation will build in parallel at 
any time.  Where there are inter-dependencies between packages, they will be 
built sequentially, in the order dictated by the dependency graph emerge 
calculates, so the number of parallel emerge jobs can end up lower than the 
user specified.

If no jobs are specified on the command line, the EMERGE_DEFAULT_OPTS variable 
in make.conf will be sourced instead.  If --jobs is given but left without a 
value, the number of parallel emerges is unlimited and will swamp the CPU - 
see --load-average next.
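For reference, a minimal make.conf sketch of that variable; the numbers are 
my own illustration for an assumed 16-thread machine, not a recommendation:

```shell
# /etc/portage/make.conf (illustrative values)
# Build up to 4 non-dependent packages in parallel, but do not
# start another package build while the system load average
# exceeds 14 (roughly 0.9 x 16 threads).
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=14"
```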

The --load-average directive in emerge does not set the number of packages to 
build; it is a threshold.  Emerge will not start building a new package while 
the system load average exceeds the given value, and the check is made each 
time emerge considers starting another build.  The figure is the ordinary 
system load average, as reported by uptime (Portage reads the 1-minute 
figure).  It is commonly recommended to set the load-average at the number of 
CPU cores x 0.9, to maintain some system responsiveness.

In addition to the above, we can specify -j (--jobs) and -l (--load-average) 
in MAKEOPTS within make.conf.  These directives determine how many 'make' 
jobs - typically compiler processes - are allowed to run concurrently within 
a single package build, whenever emerge passes the MAKEOPTS variable to the 
build system.
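In make.conf that might look like this (again, illustrative values assuming 
a 16-thread machine):

```shell
# /etc/portage/make.conf (illustrative values)
# Up to 17 compile jobs per package build (threads + 1), but make
# will not spawn a new job while the load average exceeds 15.
MAKEOPTS="-j17 -l15"
```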

So, if you have set MAKEOPTS="-j10" and then run 'emerge --jobs=10', you can 
see up to 10x10=100 parallel make tasks in your top output, and on e.g. a 
100-core CPU the load average will approach 100, i.e. 1.00 per core.

I understand --jobs provides a hard limit, i.e. an instruction NOT to run 
more than the specified number of parallel package builds, each multiplied 
by the specified number of make jobs.

The --load-average works the other way around: it is an instruction to keep 
starting more builds and/or make jobs only while the system load remains 
below the specified value, keeping the system busy up to, but not beyond, 
that average load.
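Putting the two layers together, a sketch of how the limits compose (values 
again assumed for a 16-thread box, not prescriptive):

```shell
# /etc/portage/make.conf (illustrative values)
MAKEOPTS="-j17 -l15"
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=15"
# Worst case: 4 packages x 17 make jobs = 68 tasks, but both
# load-average caps stop new work being started once the
# 1-minute load average climbs above 15.
```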

Enough about the theory; what about its application?  I cannot answer why in 
your experiment the no-jobs run with a load average of 40 ended up 35 seconds 
slower.  No jobs limit implies emerge would keep increasing the number of 
package builds for as long as the inter-dependencies between them and the 
available resources allowed.  Assuming no other processes were consuming 
resources during either run, the no-jobs experiment should have completed the 
work sooner.  Since you observed that no swap was used, resources were 
clearly not exhausted.  Could it be that the load-average calculation 
introduced some feedback and hysteresis inefficiency, with the average 
building up, overshooting and cutting back, compared with the previous hard 
limit on the number of jobs?  I don't know.

The way I set my systems, admittedly with only a fraction of your resources, 
is by setting only the MAKEOPTS variable.  I set the jobs number to the 
number of CPU threads + 1, or 2 x CPU threads + 1, and the load average to 
0.9 of the jobs number.  Smaller packages do not exhaust my RAM, but monsters 
like Chromium/qtwebengine, at more than 2G per job, easily do.  For these I 
set specific per-package job limits in /etc/portage/env/ and keep an eye on 
how much swap use may grow as gcc versions evolve.  If I notice swap 
thrashing the disk, I dial the jobs number down accordingly.
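A sketch of that per-package mechanism; the file name 'jobs-4.conf' and the 
job count are my own illustrations, and the package atoms are just examples 
of heavyweight builds:

```shell
# /etc/portage/env/jobs-4.conf (illustrative)
# Reduced limits for memory-hungry builds, e.g. ~2G per compile
# job, leaving headroom for the rest of the system.
MAKEOPTS="-j4 -l4"

# /etc/portage/package.env
# Apply the reduced limits only to the heavyweight packages.
www-client/chromium     jobs-4.conf
dev-qt/qtwebengine      jobs-4.conf
```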



