On 9/4/14, 12:26 AM, Nicholas Nethercote wrote:
> But in lieu of sub-file dependency tracking you'll take manual
> overrides that emulate them by doing partial builds, relying on your
> knowledge of the codebase to know that those partial builds are safe.
This is a point worth clarifying.
I'm not trying to produce a build when I want these manual overrides.
I'm trying to check that the small set of files that actually use a
new function I added compiles correctly, before I spend the time on a
more complete compile that actually produces a build.
I mean, if I add a new virtual function to nsINode and then only
compile the subset of files that call the new function, I _know_ that
if I linked libxul the resulting build would be busted: different
parts of it would disagree about what the vtable looks like. But this
is still a useful thing to be able to do as I iterate on my API
addition!
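(To make that concrete, here is a schematic sketch; this is not
actual nsINode code, and the class and member names are invented.)

    // node.h, after the API addition: a new virtual inserted into the class.
    class Node {
    public:
      virtual void Existing();
      virtual void NewThing();  // the addition; shifts the vtable slots below it
      virtual void Other();
    };

    // caller.cpp -- recompiled against the new header; this is the file I
    // actually want to sanity-check, and it compiles fine in isolation.
    void Use(Node* n) { n->NewThing(); }

    // other.o -- an object file still built against the *old* header.  Its
    // call to n->Other() goes through the old vtable slot, which in newly
    // compiled objects now holds NewThing().  Every translation unit
    // compiled, but the mixed link is busted.

Each file compiles on its own; only the linked result is wrong, and
that's exactly the part I'm happy to postpone while iterating.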
I realize that the main goal of the build system is producing an actual
working build. But while doing development this may not be a goal at
intermediate stages...
> That seems like a difficult requirement to satisfy
That, I agree on. Build systems are very geared toward "create the
build that has everything updated", not "check that these changes
compile in a small part of this large intertwingled codebase"...
-Boris
P.S. "large interwingled codebase" is key; if my habitual full build
were the SpiderMonkey shell, say, I would care a lot less about partial
builds. Though even then, if I change MIR.h having a way to just
recompile CodeGenerator.cpp and IonBuilder.cpp and make sure those
compile before I compile all the other js/src/jit stuff sure would be
nice...
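Purely as an illustration of the kind of change I mean (the class and
method names below are invented, not the real MIR.h API):

    // MIR.h (stand-in): a new query added to a hypothetical instruction class.
    class MHypotheticalAdd {
    public:
      bool canOverflow() const;  // the new API being iterated on
    };

    // CodeGenerator.cpp / IonBuilder.cpp (stand-ins): the only files that
    // use the new query.  Getting just these translation units through the
    // compiler -- even with something like -fsyntax-only, no linking at
    // all -- tells me whether the API shape is right, long before a full
    // js/src/jit rebuild.
    void visitAdd(MHypotheticalAdd* ins) {
      if (ins->canOverflow()) {
        // ... emit the overflow check ...
      }
    }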