On Wednesday, 13.07.2022 at 19:21 +0200, Julien Lepiller wrote:
> I've heard that theory before. From observation on my late armhf
> server (two cores):
> 
> - it takes just below 2GB to build one of the derivations
> - It doesn't swap a single byte
> - whether with two or a single core, it takes roughly the same amount
> of memory
> - substitution is nice, it doesn't require lots of memory (and then
> --cores is useless)
> 
> I think it's because we load all the files in a batch before we build
> them. The biggest memory cost is not running the compiler on a
> thread, but loading the files and keeping them in memory for the
> whole duration of the build. With more threads, we still don't load
> each file more than once (twice to build it), so there's no reason it
> should take more memory.
> 
> Or maybe the process of loading and building is inherently single-
> threaded? I don't think so, but maybe?
Loading and building are implemented in build-aux/compile-all.scm,
which does use multiple parallel workers.  However, since all
compilation appears to happen in the same Guile process, I don't think
multi-threading has much of an impact on memory (it's the same garbage
collector regardless of ordering).
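
Roughly this shape, as a sketch (the file list and the use of
n-par-for-each are my illustration, not the actual compile-all.scm
code):

  (use-modules (ice-9 threads)           ; n-par-for-each
               (system base compile))    ; compile-file

  (define files '("a.scm" "b.scm" "c.scm"))

  ;; Each file is loaded once, up front, and stays live in the single
  ;; Guile heap for the whole build...
  (for-each load files)

  ;; ...then compilation fans out over N workers, which all share that
  ;; one process and one garbage collector, so extra workers don't
  ;; multiply the resident memory.
  (n-par-for-each (current-processor-count)
                  compile-file
                  files)

If that's the shape, --cores mostly changes ordering and CPU use, not
the peak heap.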

Cheers
