+1 for Alex's comment.
As I/O (and potentially RAM) is clearly the bottleneck here, capping the thread count at some arbitrary value doesn't address the issue at the right end.
I would rather see real resource management.
From my point of view there is a huge difference between packaging, say, 128 shell scripts in parallel and invoking 128 gcc linker runs.

On 03.12.20 19:20, Alexander Kanavin wrote:
I'd rather teach bitbake to abstain from starting new tasks when I/O or CPU gets tight.

Alex
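For what it's worth, the load-aware throttling Alexander describes might look roughly like this. This is a toy sketch in plain Python, not actual bitbake code; the function name, threshold, and polling scheme are all made up for illustration:

```python
import os
import time

def wait_for_capacity(max_load_per_cpu=1.5, poll_seconds=1.0, max_wait=0.0):
    """Toy sketch: before launching another task, wait while the 1-minute
    load average per CPU exceeds a threshold.  Names and numbers are
    illustrative only, not bitbake API."""
    ncpus = os.cpu_count() or 1
    waited = 0.0
    while True:
        load1, _, _ = os.getloadavg()  # Unix-only
        if load1 / ncpus <= max_load_per_cpu:
            return True   # system has headroom, go ahead
        if max_wait and waited >= max_wait:
            return False  # waited long enough, let the caller decide
        time.sleep(poll_seconds)
        waited += poll_seconds
```

A real implementation would presumably also need to watch I/O pressure (e.g. via /proc/pressure on recent kernels), since CPU load alone doesn't capture the unpack-storm problem.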

On Thu, 3 Dec 2020 at 18:48, Ross Burton <r...@burtonini.com> wrote:

    Hi,

    Currently, BB_NUMBER_THREADS and PARALLEL_MAKE use the number of cores
    available unless told otherwise.  This was a good idea six years
    ago[1], but some modern machines are moving to very large core counts.

    For example, 88-core dual-Xeon machines are fairly common. A ThunderX2
    has 256 cores (2 sockets, 4 hyperthreads per physical core). The
    Ampere Altra is dual socket: 2*80=160 cores.

    At this level of parallelisation the sheer amount of I/O from the
    unpack storm is quite excessive.  As a strawman argument, I propose a
    hard cap on the default BB_NUMBER_THREADS of -- and I'm literally
    making up numbers here -- 32.  Maybe 64.  Comments?

    Cheers,
    Ross

    [1] http://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/?id=1529ef0504542145f2b81b2dba4bcc81d5dac96e
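
As a concrete illustration of Ross's strawman (32 is literally his made-up number, and the helper name here is mine, not an existing OE function), the capped default would amount to something like:

```python
import os

# Strawman hard cap from the thread; 32 is a made-up number,
# not a settled default.
HARD_CAP = 32

def capped_default_threads(cap=HARD_CAP):
    """Default parallelism would still follow the core count,
    but never exceed the cap."""
    return min(os.cpu_count() or 1, cap)
```

In bitbake.conf terms this just wraps the existing cpu-count default in a min(); whether the right answer is 32, 64, or something load-based is exactly what's being discussed here.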


View/Reply Online (#145249): https://lists.openembedded.org/g/openembedded-core/message/145249