On 11/9/21 4:41 PM, Konrad Weihmann wrote:
> On 09.11.21 09:48, Robert Yang wrote:
>> The original value easily causes a do_package error when the CPU count is
>> large, for example, 128 cores and 512G mem:
>>
>> error: create archive failed: cpio: write failed - Cannot allocate memory
>>
>> Setting ZSTD_THREADS to half of the CPU count avoids the error in my
>> testing.
>> Signed-off-by: Robert Yang <liezhi.y...@windriver.com>
>> ---
>>  meta/conf/bitbake.conf | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/meta/conf/bitbake.conf b/meta/conf/bitbake.conf
>> index 71c1e52ad6..46ebf5113f 100644
>> --- a/meta/conf/bitbake.conf
>> +++ b/meta/conf/bitbake.conf
>> @@ -833,7 +833,7 @@ XZ_DEFAULTS ?= "--memlimit=${XZ_MEMLIMIT} --threads=${XZ_THREADS}"
>>  XZ_DEFAULTS[vardepsexclude] += "XZ_MEMLIMIT XZ_THREADS"
>>  # Default parallelism for zstd
>> -ZSTD_THREADS ?= "${@oe.utils.cpu_count(at_least=2)}"
>> +ZSTD_THREADS ?= "${@int(oe.utils.cpu_count(at_least=4)/2)}"
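To make the effect of the change concrete, here is a small sketch (not part of the patch) comparing the two expressions. It assumes oe.utils.cpu_count() returns the detected core count raised to at_least; the detected count is faked as a parameter for illustration.

```python
# Sketch comparing the old and new ZSTD_THREADS defaults. cpu_count() is a
# stand-in for oe.utils.cpu_count(): we assume it returns the detected CPU
# count raised to at_least; 'detected' is a fake parameter for illustration.
def cpu_count(detected, at_least=1):
    return max(detected, at_least)

def old_threads(detected):
    # Before the patch: one zstd thread per core, minimum 2.
    return cpu_count(detected, at_least=2)

def new_threads(detected):
    # After the patch: half the cores; at_least=4 keeps a floor of 2 threads.
    return int(cpu_count(detected, at_least=4) / 2)

for cores in (1, 4, 16, 64, 128):
    print(f"{cores:>3} cores: old={old_threads(cores):>3} new={new_threads(cores):>3}")
```

On the 128-core machine from the report this halves zstd's worker count (128 -> 64), while small machines keep at least 2 threads.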
> Then why not just limit it for the large setups you are referring to in the
> example, for instance like
> ZSTD_THREADS ?= "${@min(int(oe.utils.cpu_count(at_least=4)), <insert a sane
> value of your choice>)}"
This sounds like a good choice.
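For reference, the suggested clamp could be sketched like this; the cap of 20 is only the example value Konrad mentions later for XZ, not something taken from oe-core.

```python
# Sketch of the suggested clamp: scale the thread count with the cores,
# but never beyond a fixed cap. max() stands in for
# oe.utils.cpu_count(at_least=...); CAP is an illustrative value only.
CAP = 20  # "<insert a sane value of your choice>"

def clamped_threads(detected, at_least=4, cap=CAP):
    return min(max(detected, at_least), cap)

print(clamped_threads(2), clamped_threads(16), clamped_threads(128))
```

Small machines behave as before, while the 128-core case is held at the cap.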
> BTW this can be also done in your local.conf - as Alex already said, there is
Setting it in my local.conf makes it work for me, but that isn't good for
oe-core's OOBE, since the issue would still exist for everyone else.
> simply no reason to make it slower for everyone, while also half the threads of
The whole build contains a lot of tasks. For lower CPU core counts the total
resources are fixed, so when zstd uses less memory, other tasks can use more.
Limiting zstd's threads doesn't make the build slower; it was actually faster
in my testing. It would be great if you could help test it. The simple testing
commands are:
# Make everything ready:
$ bitbake linux-yocto
# Before the patch, set ZSTD_THREADS to 128 and run 6 times:
$ for i in `seq 0 5`; do time bitbake linux-yocto -cpackage_write_rpm -f > before_$i.log; done
Note, the times are printed on the screen, not into before_$i.log; redirecting
bitbake's output to the log files just keeps the console clean so the times are
easier to read.
What I get is:
real 2m12.079s
real 2m0.177s
real 1m52.426s
real 2m3.396s
real 2m16.018s
real 1m58.595s
Drop the first build time since it *may* contain parsing time; the last
five builds (truncated to whole seconds) give:
Total: 609 seconds
Average: 609 / 5.0 = 121.8 seconds
# After the patch, set ZSTD_THREADS to 64, and run 6 times:
$ for i in `seq 0 5`; do time bitbake linux-yocto -cpackage_write_rpm -f > after_$i.log; done
What I get is:
real 1m50.017s
real 1m50.400s
real 1m53.174s
real 2m4.817s
real 1m53.476s
real 1m56.794s
Drop the first build time since it *may* contain parsing time; the last
five builds (truncated to whole seconds) give:
Total: 576 seconds
Average: 576 / 5.0 = 115.2 seconds
So the smaller thread count is actually faster than the larger one.
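The arithmetic above can be reproduced with a short script (a sketch; the timing strings are copied from the runs above, and fractional seconds are truncated, which matches the totals given):

```python
# Recompute the totals/averages: drop the first run (parsing), truncate each
# time to whole seconds, then sum and average the remaining five.
before = ["2m12.079s", "2m0.177s", "1m52.426s", "2m3.396s", "2m16.018s", "1m58.595s"]
after  = ["1m50.017s", "1m50.400s", "1m53.174s", "2m4.817s", "1m53.476s", "1m56.794s"]

def seconds(t):
    minutes, secs = t.rstrip("s").split("m")
    return int(minutes) * 60 + int(float(secs))  # truncate fractional seconds

def total_and_average(times):
    kept = [seconds(t) for t in times[1:]]  # drop the first build
    return sum(kept), sum(kept) / len(kept)

print(total_and_average(before))  # (609, 121.8)
print(total_and_average(after))   # (576, 115.2)
```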
// Robert
> a 128 core machine could be a troublesome setup.
> Last time I had this issue (with XZ back then) I used
> "${@min(int(oe.utils.cpu_count(at_least=4)), 20)}"
>> ZSTD_THREADS[vardepvalue] = "1"
>> # Limit the number of threads that OpenMP libraries will use. Otherwise they
View/Reply Online (#158004):
https://lists.openembedded.org/g/openembedded-core/message/158004