On 07/24/2017 02:13 PM, Joshua Cranmer 🐧 wrote:
> On 7/24/2017 3:25 PM, Enrico Weigelt, metux IT consult wrote:
>> Not sure whether a 4-core i7 with 8 GB RAM already counts as "old", but
>> it's really slow on my box. I've got the impression there's still a lot
>> of room for optimization. For example, I wonder why lots of .cpp files
>> are #include'd into one.
> In that example, undoing that slows down your build. (Parsing C++
> header files takes a lot of time in the aggregate, and you spend less
> time linking when there are no duplicates of inlined methods).
Sorry, I know this is a tangent, but I'd like to underscore that final
clause, since it's been so surprising to me.
Unified builds *massively* speed up the final link. I dropped the
unified chunk size (number of .cpp files glued together at a time in
unholy matrimony) for js/src because of a very common use case, which is
to touch one file and rebuild. With the default of 16 source files glued
together, touching one file means recompiling a huge combined source file.
Compiles are naturally much faster with the new value (6).
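For the curious, a unified chunk is just a generated file that textually
#includes the real sources. A sketch (the file names below are hypothetical,
modeled on the build system's naming scheme):

```cpp
// Sketch of a generated unified chunk, e.g. Unified_cpp_js_src0.cpp
// (names hypothetical). Shared headers get parsed once per chunk, and
// each inline/template body is emitted once per chunk rather than once
// per .cpp file -- but touching any one of these sources recompiles
// the whole chunk.
#include "SourceA.cpp"
#include "SourceB.cpp"
#include "SourceC.cpp"
#include "SourceD.cpp"
#include "SourceE.cpp"
#include "SourceF.cpp"  // 6 per chunk after the change; 16 by default
```

If memory serves, the chunk size is controlled per directory with the
FILES_PER_UNIFIED_FILE variable in moz.build.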
Surprisingly, though, the total time to touch a single source file and
get out a working build was pretty much unchanged -- the time saved in
the compilation was spent linking more object files instead.
Overall, it's still a good change, since when you're still working
through compile errors on something, you aren't going to be linking
anyway. It just surprised me that the link could get that much slower.
More recently, I accidentally started using a nonunified build, and
honestly thought the linker was hanging because it took so much longer
than I'm used to (this was with n=1 instead of n=6 or n=16).
My guess is that the compilers/linkers don't really handle heavy C++
template usage very well, and end up generating and then eliminating
massive numbers of duplicate template instantiations. But that's idle
speculation.
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform