<snip>

> Would it be possible that when I do an hg pull of mozilla-central or
> mozilla-inbound, I can also choose to download the object files from the
> most recent ancestor that had an automation build? (It could be a separate
> command, or ./mach pull.) They would go into a local ccache (or probably
> sccache?) directory. The files would need to be atomically updated with
> respect to my own builds, so I could race my build against the download.
> And preferably the download would go roughly in the reverse order of my own
> build, so they would meet in the middle at some point, after which only the
> modified files would need to be compiled. It might require splitting debug
> info out of the object files for this to be practical, where the debug info
> could be downloaded asynchronously in the background after the main build
> is complete.
>

Just FYI, in Austin (December 2017, for the archives) the build peers
discussed something like this.  The idea would be to figure out how to
slurp (some part of) an object directory produced in automation, in order
to get cache hits locally.  We really don't have a sense for how much of an
improvement this might be in practice, and it's a non-trivial effort to
investigate enough to find out.  (I wanted to work on it but it doesn't fit
my current hats.)
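
To make the shape of that idea a bit more concrete, here is a rough sketch
in Python of what a prefetch step might do (nothing like this exists today;
the build-index service, its URL, the cache layout, and every name below are
invented purely for illustration):

    # Purely hypothetical sketch: the "build index" service, its URL, and
    # the cache layout are all made up.
    import os
    import subprocess
    import urllib.request

    CACHE_DIR = os.path.expanduser("~/.cache/sccache")       # local cache to warm
    INDEX_URL = "https://example.invalid/build-index/{rev}"  # hypothetical index

    def ancestors(limit=50):
        """Return ancestors of the working parent, newest first."""
        out = subprocess.check_output(
            ["hg", "log", "-r", "reverse(ancestors(.))", "-l", str(limit),
             "-T", "{node}\n"], universal_newlines=True)
        return out.splitlines()

    def has_automation_build(rev):
        """Hypothetical check: did automation publish a cache for this rev?"""
        url = INDEX_URL.format(rev=rev)
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return response.status == 200
        except Exception:
            return False

    def prefetch():
        for rev in ancestors():
            if has_automation_build(rev):
                print("would fetch cache entries for", rev, "into", CACHE_DIR)
                # A real version would download entries here, writing each one
                # to a temporary file and renaming it into place, so a local
                # build racing against the download never sees partial data.
                return rev
        return None

    if __name__ == "__main__":
        prefetch()

As the original mail notes, a real version would probably also have to split
debug info out of the object files and fetch it lazily in the background.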

My personal concern is that our current build system doesn't have a single
place that can encode policy about our build.  That is, there's nothing to
control the caching layers and to schedule jobs intelligently (e.g., push
Rust and SpiderMonkey forward, and work harder to get them from a remote
cache).  That could be a distributed job server, but it doesn't have to be:
it just needs to be able to control our build process.  None of the current
build infrastructure (sccache, the recursive make build backend, the
in-progress Tup build backend) is a good home for those kinds of policy
choices.  So I'm concerned that we'd find that an object directory caching
strategy is a good idea... and then have a chasm when it comes to
implementing it and fine-tuning it.  (The chasm from artifact builds to a
compile environment build is a huge pain point, and we don't want to
replicate that.)
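
To show what I mean by "a single place that can encode policy", here is a
toy sketch (entirely hypothetical; no such interface exists in the tree) of
the kind of object every backend and cache layer would defer to:

    # Invented for illustration: a policy object that backends would consult
    # before scheduling a job or querying a cache.
    from collections import namedtuple

    Job = namedtuple("Job", ["name", "kind", "estimated_cost"])

    class BuildPolicy:
        # Work we want pushed forward and pulled from the remote cache even
        # if that means waiting longer for a hit.
        HIGH_PRIORITY = ("gkrust", "js/src")

        def priority(self, job):
            boost = 10.0 if job.name in self.HIGH_PRIORITY else 1.0
            return boost * job.estimated_cost   # schedule the long poles first

        def remote_cache_timeout(self, job):
            # Try harder to get expensive jobs from the remote cache.
            return 30.0 if job.name in self.HIGH_PRIORITY else 5.0

    # A backend (make, Tup, or a distributed job server) would order its
    # ready queue with priority() and hand remote_cache_timeout() to the
    # cache client (e.g. sccache) it fronts.
    policy = BuildPolicy()
    ready = [Job("dom/base", "cpp", 40.0), Job("gkrust", "rust", 600.0)]
    ready.sort(key=policy.priority, reverse=True)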

> Or, a different idea: have Rust "artifact builds", where I can download
> prebuilt Rust bits when I'm only recompiling C++ code. (Tricky, I know,
> when we have code generation that communicates between Rust and C++.) This
> isn't fundamentally different from the previous idea, or distributed
> compilation in general, if you start to take the exact interdependencies
> into account.


In theory, caching Rust crate artifacts is easier than caching C++ object
files.  (At least, so I'm told.)  In practice, nobody has tried to push
through the issues we might see in the wild.  I'd love to see investigation
into this area, since it seems likely to be fruitful on a short time
scale.  In a different direction, I am aware of some work (cited in this
thread?) towards an icecream-like job server for distributed Rust
compilation.  It doesn't get us artifact-build-style caching, but it's related.
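
To illustrate why the crate-level case looks more tractable in theory: a
crate's compiled output is (roughly) a function of a short, explicit list of
inputs, so the cache key almost writes itself.  This is only a sketch of the
idea, with made-up inputs, not how sccache or cargo actually key their caches:

    import hashlib
    import json

    def crate_cache_key(crate, src_hashes, rustc_version, features, dep_keys):
        """Hash everything that (roughly) determines the compiled artifact."""
        payload = json.dumps({
            "crate": crate,
            "sources": sorted(src_hashes),   # content hashes of the .rs files
            "rustc": rustc_version,          # exact compiler version
            "features": sorted(features),    # enabled cargo features
            "deps": sorted(dep_keys),        # cache keys of dependency crates
        }, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    # Look the key up in a shared cache before invoking rustc at all.  The
    # C++ equivalent has to fold in preprocessor state, compiler flags,
    # debug info paths, and so on, which is where it gets messy in practice.
    key = crate_cache_key("style", ["ab12", "cd34"],
                          "rustc 1.24.1", ["gecko"], ["9f8e"])
    print(key)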

Best,
Nick
