On 2015-01-16 2:10 AM, Steve Fink wrote:
On 01/15/2015 04:52 PM, Ehsan Akhgari wrote:
On 2015-01-15 7:30 PM, Steve Fink wrote:
On 01/15/2015 11:21 AM, Ehsan Akhgari wrote:
On 2015-01-14 8:37 PM, Steve Fink wrote:
On 01/14/2015 11:26 AM, Ehsan Akhgari wrote:
From now on, the only supported build mode is unified compilation. I am planning to follow up with removing support for the --disable-unified-compilation configure option altogether in bug 1121000.

I commented in the bug, but I guess this is probably a better forum.

Why is the configure option being removed?

Because unsupported build configurations that are *guaranteed* not to be maintained are not worth keeping around.  I am 90% sure that --disable-unified-compilation has been broken since yesterday.  :-)

But that's not the point. I totally agree that non-unified builds will
get broken immediately and will stay broken most of the time. That's not
in question. The point is how hard it is for someone who comes along and
wants to fix our codebase to be valid C++, even if that fix is not
permanent.

Specifically, what I imagine happening is this: we start out nowish with
nonunified builds working. Within a day, they're busted. After some
time, somebody who gains from them being correct wants to fix them, and
uses --disable-unified-compilation to do so. Rinse, repeat. The amount of
breakage to fix each time is determined by the time between fixes.

If this hypothetical person has to go through every moz.build and change
UNIFIED_SOURCES to SOURCES, or FILES_PER_UNIFIED_FILE to 1, then they're
less likely to do it, and the interval between fixes will grow, and
eventually the code base will be rotten enough that nobody will want to
deal with it. Oh, except that people will occasionally add new files and
uncover breakage that wasn't their fault because they shift around the
boundaries. Which isn't awful, because you'll only have to fix a
localized area, but it is guaranteed to baffle many of the people who
hit it until they've been educated. More overhead to contributing, more
"specialness" in our codebase.

If you were saying that the --disable-unified-compilation mechanism itself is
likely to break, then I couldn't really argue with removing it. Is there
a simpler way to accomplish the same thing? A
FORCE_MAX_FILES_PER_UNIFIED_FILE_EVERYWHERE=1 setting in the toplevel
moz.build or something? It doesn't need to be pretty, but it should be
easier than

    for f in **/moz.build; do echo "FILES_PER_UNIFIED_FILE=1" >> "$f"; done

(assuming that even works, and note that you'll need zsh for the '**'
thing).
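
A plain-Python equivalent that doesn't depend on zsh globbing might look like this (untested sketch; assumes it is run from the top of the source tree):

    import os

    # Append a forced non-unified setting to every moz.build in the tree.
    for dirpath, dirnames, filenames in os.walk('.'):
        if 'moz.build' in filenames:
            with open(os.path.join(dirpath, 'moz.build'), 'a') as f:
                f.write('\nFILES_PER_UNIFIED_FILE = 1\n')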

There are many ways to force UNIFIED_SOURCES to be built in
non-unified mode.  Reverting my patch is one of them.  Applying the
following patch is another:

diff --git a/python/mozbuild/mozbuild/backend/recursivemake.py b/python/mozbuild/mozbuild/backend/recursivemake.py
index 608f6f5..de754b6 100644
--- a/python/mozbuild/mozbuild/backend/recursivemake.py
+++ b/python/mozbuild/mozbuild/backend/recursivemake.py
@@ -394,7 +394,7 @@ class RecursiveMakeBackend(CommonBackend):
             non_unified_var = var[len('UNIFIED_'):]
 
             files_per_unification = obj.files_per_unified_file
-            do_unify = files_per_unification > 1
+            do_unify = False
             # Sorted so output is consistent and we don't bump mtimes.
             source_files = list(sorted(obj.files))

This is a little cryptic, but something similar to this does seem
preferable to keeping all of the code for --disable-unified-compilation.
It just needs to be discoverable by the people who would usefully
discover it.

Can you think of something better/more obvious?
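
For instance, the hunk above could honor an environment variable instead of hard-coding the answer; MOZ_DISABLE_UNIFIED is a name I just made up, not an existing option:

    import os

    # Hypothetical variant of the patch above: opt out of unification via
    # an environment variable rather than editing the backend each time.
    do_unify = (files_per_unification > 1 and
                'MOZ_DISABLE_UNIFIED' not in os.environ)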

Part of what you're complaining about above is not about removing support for the configure option; it's about dropping support for non-unified builds on infrastructure.

No, because I agree with that decision.

OK, that's good to know.  :-)

Yes, there are definitely downsides to what we decided to do.  Our code is no longer "valid C++" (which honestly is a very arbitrary line -- our code has never been valid C++ from a purist's standpoint anyway, because of all of the compiler-specific extensions that we use).

I don't think that's really fair. I don't give a rip about sticking to
only C++0x or whatever. Our code conforms to an even stricter standard,
namely the common subset supported by a collection of different compilers.

What I'm concerned about is code that doesn't compile on any compiler if
you do it the "normal" way, one file at a time. Code that uses symbols
that are totally unknown or not in any using'd namespace. Code referring
to mysterious ALL_CAPS_SYMBOLS (that happen to be defined as macros in
headers that are not #included.) Code that depends on template
specializations from a non-included file for correctness. Unified
compilation creates dummy .cpp files that #include <dogs> and <cats> and
force them to live together.
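
(The generated files really are just include lists. A rough sketch of what the backend does per chunk, simplified from the actual mozbuild code:)

    # Simplified sketch of unified-file generation; the real backend
    # output is more elaborate than this.
    def write_unified_file(path, source_files):
        with open(path, 'w') as f:
            for src in source_files:
                f.write('#include "%s"\n' % src)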

Here is the issue that I have with what you're saying. It has never been the case that you can take one of our .cpp files and just pass it to the compiler and get an object file out of it. We use disgusting hacks to mask the path to #included files (I'm looking at LOCAL_INCLUDES/GENERATED_INCLUDES) instead of doing #include "path/to/file.h". We require you to pass things such as -DMOZILLA_INTERNAL_API on the command line for almost all of the code in our tree (but not all of it!). We require you to specify -include ../../../mozilla-config.h and whatnot.
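
To make that concrete, here's an illustrative moz.build fragment (the paths are made up) showing the kind of per-directory state that only the build system knows:

    # Illustrative moz.build fragment; directory paths are made up.
    LOCAL_INCLUDES += [
        '/dom/base',
        '/xpcom/base',
    ]
    DEFINES['MOZILLA_INTERNAL_API'] = True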

So, I agree with your assertion that this makes it harder to take a .cpp file and compile it on its own. But doing that without the help of the build system has never been possible. And once you start asking the build system for the exact command line, you can benefit from things such as the above patch. Why is this not acceptable to you?

It's bad for IDEs,

Ben Turner pointed out to me on IRC yesterday that Visual Studio will have a hard time because of this, and he's right, but that's because the Visual Studio project generator is broken. It should add the UnifiedXXX.cpp files to the project, not FileIncludedInUnified.cpp. So that's solvable.

it's bad for people reading the code ("what is this
Value thing? Oh... some other .cpp file must be using JS::Value..."),
it's a hazard when adding/removing files, and it's a problem when you
want to dig into one source file in particular perhaps by generating a
.i for it.

Yes, these problems do exist.

And yes, there will be the potential for breakage when you add or
remove unified sources.  And that breakage would not be your fault,
and you would need to be educated about what's going on, unless you
have experience with another project that uses the same idea (which is
unlikely.)

But there are upsides too.  You won't get backed out because you broke
a build configuration that we do not ship.  We don't pay the monetary
and human resource cost of keeping it working.  And you will get
*fast* builds.  I'd like to remind folks that unified builds improved
our build times by more than 2x.

Er, I was trying to keep that separate too, since I happen to agree with
that decision. Well, with some misgivings, but all in all I prefer
turning off nonunified builds in automation. (I'd kinda like to keep
building one across all platforms daily or weekly or something, treating
it as non-tier-1, but I can believe that it might end up looked at by
exactly zero people.)

So it is a trade-off.  It's completely fine if you disagree on the
call that we made there, but I'd like to keep that conversation
separate from the one about how to build the tree locally in
non-unified mode.

As would I.

For the latter conversation, there are multiple ways.  I don't think we should accept such patches because 1) we already have too many configure/etc options that break your build, and we keep getting problem reports from people who have options in their mozconfigs that are unsupported and cause broken builds (please see the archives of <https://groups.google.com/forum/#!forum/mozilla.dev.builds>), and 2) I have yet to see any evidence to suggest that there will be people who will keep that build configuration working.  However, I very well realize that both of these points are my opinion.  This is ultimately gps/glandium's call, and if they disagree with me, I won't fight further on this hill.

I am also uncertain of #2.

However, I very strongly believe that the trade-offs for whether we
should do these builds *on the infrastructure* tip very heavily
towards not doing so.  Not sure whose call that will ultimately be, but I would be extremely sad to impose on our developers the cost of backouts for breaking a configuration that we don't ship.

Agreed. My main concern is the added difficulty of contributing to the
code base when there are unified bugs lurking. I agree with you: that
cost is less than the daily cost of nonunified bustage backouts.

Yeah, I acknowledge that if a new contributor adds a .cpp file and suddenly gets build failures in another file, that's crappy and non-obvious. That makes me sad, but I have not been able to come up with a solution that doesn't leave a sour taste in my mouth. :/
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
