On 1/16/18 2:59 PM, smaug wrote:
On 01/16/2018 11:41 PM, Mike Hommey wrote:
On Tue, Jan 16, 2018 at 10:02:12AM -0800, Ralph Giles wrote:
On Tue, Jan 16, 2018 at 7:51 AM, Jean-Yves Avenard <jyaven...@mozilla.com> wrote:
But I would be interested in knowing how long that same Lenovo P710 takes to compile *today*…
On my Lenovo P710 (2x2x6 core Xeon E5-2643 v4), Fedora 27 Linux
debug -Og build with gcc: 12:34
debug -Og build with clang: 12:55
opt build with clang: 11:51
Interestingly, I can almost no longer get any benefit from using icecream: with 36 cores it saves 11s, and with 52 cores it saves only 50s…
Are you saturating all 52 cores during the builds? Most of the increase in build time is new Rust code, and icecream doesn't distribute Rust. So in addition to some long compile times for final crates limiting the minimum build time, icecream doesn't help much in the run-up either. This is why I'm excited about the distributed build feature we're adding to sccache.
Distributed compilation of Rust unfortunately won't help. It doesn't change the fact that the long pole of Rust compilation is a series of long single-threaded processes that can't run in parallel, because each of them depends on the output of the previous one.
Mike
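(To make that long-pole point concrete, here is a toy model in Python. The crate names echo real Firefox crates, but the timings and dependency edges are invented for illustration: with unlimited distributed workers, the build still can't finish faster than the longest chain of dependent rustc invocations.)

from functools import lru_cache

# crate -> (compile_minutes, direct dependencies); numbers are made up
CRATES = {
    "style": (4.0, ()),
    "gkrust-shared": (5.0, ("style",)),
    "gkrust": (3.0, ("gkrust-shared",)),
}

@lru_cache(maxsize=None)
def earliest_finish(crate):
    """Earliest completion time for `crate` given infinite parallelism."""
    minutes, deps = CRATES[crate]
    return minutes + max((earliest_finish(d) for d in deps), default=0.0)

# Even if every rustc invocation ran on its own machine, this chain
# still takes 4 + 5 + 3 = 12 minutes end to end.
print(earliest_finish("gkrust"))  # 12.0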
Distributed compilation also won't help those remotees who may not have machines available to set up icecream or distributed sccache.
(I just got a new laptop because Rust compilation was so slow.)
I'm hoping the Rust compiler itself gets some heavy optimizations.
I'm in the same situation, which reminds me of something I wrote long
ago, shortly after joining Mozilla:
https://wiki.mozilla.org/Sfink/Thought_Experiment_-_One_Minute_Builds
(no need to read it, it's ancient history now. It's kind of a fun read
IMO, though you have to remember that it long predates mozilla-inbound,
autoland, linux64, and sccache, and was in the dawn of the Era of Sheriffing, so build breakages were more frequent and more damaging.) But
in there, I speculated about ways to get other machines' built object
files into a local ccache. So here's my latest handwaving:
Would it be possible that, when I do an hg pull of mozilla-central or mozilla-inbound, I could also choose to download the object files from the most recent ancestor that had an automation build? (It could be a separate command, or ./mach pull.) They would go into a local ccache (or probably sccache?) directory. The files would need to be updated atomically with respect to my own builds, so that I could race my build against the download. Preferably, the download would proceed in roughly the reverse order of my own build, so the two would meet in the middle at some point, after which only the modified files would need to be compiled. It might require splitting the debug info out of the object files for this to be practical; the debug info could then be downloaded asynchronously in the background after the main build completes.
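Purely to illustrate the mechanics I'm handwaving about (the URL, cache layout, and manifest below are all invented; no such service exists), a sketch in Python:

import os
import tempfile
import urllib.request

CACHE_DIR = os.path.expanduser("~/.cache/pulled-objects")     # invented layout
ARTIFACT_URL = "https://example.invalid/builds/{rev}/{path}"  # placeholder, not a real endpoint

def atomic_write(dest, data):
    """Publish via rename, so a concurrent build sees either the whole
    object file or nothing, never a partial write."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, dest)  # atomic rename
    except BaseException:
        os.unlink(tmp)
        raise

def fetch_object(rev, rel_path):
    """Fetch one object file for `rev`, unless something (my own build,
    racing in the opposite direction) already produced it."""
    dest = os.path.join(CACHE_DIR, rev, rel_path)
    if os.path.exists(dest):
        return
    with urllib.request.urlopen(ARTIFACT_URL.format(rev=rev, path=rel_path)) as resp:
        atomic_write(dest, resp.read())

def pull_objects(rev, manifest):
    """Walk the automation build's object list in reverse build order,
    so the download and my forward-order local build meet in the middle."""
    for rel_path in reversed(manifest):
        fetch_object(rev, rel_path)

The rename is what makes the race safe: my build and the download can both populate the cache, and whichever writes a given object first wins. The debug-info split would just be a second, lower-priority manifest fetched the same way.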
Or, a different idea: have Rust "artifact builds", where I can download prebuilt Rust bits when I'm only recompiling C++ code. (Tricky, I know, when we have code generation that communicates between Rust and C++.) This isn't fundamentally different from the previous idea, or from distributed compilation in general, once you start taking the exact interdependencies into account.
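Again just handwaving in code (the decision logic below is invented, and it deliberately ignores the generated-code problem just mentioned): the gating question for a Rust artifact build is whether any local change can affect the Rust side at all.

import subprocess

# Files whose modification means prebuilt Rust bits can't be reused.
RUST_INPUTS = (".rs", "Cargo.toml", "Cargo.lock")

def rust_inputs_dirty(repo_root="."):
    """True if any locally modified/added/removed file could feed the Rust build."""
    out = subprocess.run(
        ["hg", "status", "--modified", "--added", "--removed", "--no-status"],
        cwd=repo_root, capture_output=True, text=True, check=True,
    ).stdout
    return any(line.endswith(RUST_INPUTS) for line in out.splitlines())

def plan_rust_build():
    if rust_inputs_dirty():
        return "compile rust locally"      # can't reuse automation's output
    return "download prebuilt rust bits"   # e.g. the gkrust static library

The hard part, as noted, is that "could feed the Rust build" isn't just .rs files once code generation couples the two sides, which is exactly where the exact interdependencies come in.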