On Sat, Mar 01, 2025 at 16:15:12 +0000, vspefs wrote:
> I read a few mails from the autoconf thread. I'll try to read them all now.
> However,
> a maybe-off-topic-but-could-be-on-topic question: what exactly is the state of
> Autotools now? The whole Autotools build system seems to be on a very slow
> release cycle. They seem to lack enough contributors/maintainers as well.

It was unmaintained for years, but Zack Weinberg took up maintenance in
2020. See these articles:

    https://www.owlfolio.org/development/autoconf-swot/
    https://lwn.net/Articles/834682/

AFAICT, it's in good hands.

> So if Autotools is having release struggles, I would personally prefer
> solutions that require less effort on the build system side.

I think the 9-year gap is "done", but yes, substantial work is still
needed for automake to support C++ modules properly. I have no idea who
might be interested in funding that (Red Hat, for GCC dogfooding of C++
modules, perhaps?), never mind doing the necessary work.

> Also, I forgot to "reply all" on the last mail. Here's the mail that answers
> some questions from NightStrike:
> 
> > GCC conjures up both the .o file and the .gcm file in one invocation when
> > possible, too. And yes, that can be managed well with a grouped target -
> > but a rule with a grouped target must have a recipe, which I think is a
> > little beyond `gcc -M`'s scope.

It's not a significant problem on the build graph side. I only
implemented single-command CMI-and-object rules so far because:

- it's what Fortran does today and is already supported
- it's supported by the three major C++ compilers
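
For concreteness, a minimal sketch of such a rule in GNU Make terms,
assuming GNU Make 4.3+ for the `&:` grouped-target syntax, GCC's
`-fmodules-ts` with its default gcm.cache/ CMI location, and a
hypothetical interface unit foo.cc that exports module foo (the recipe
line would be tab-indented in a real makefile):

    # one recipe, two declared outputs: the object and the CMI
    foo.o gcm.cache/foo.gcm &: foo.cc
        g++ -std=c++20 -fmodules-ts -c foo.cc -o foo.o

Make runs the recipe once and considers both outputs up to date
afterwards, which is exactly the single-command shape the quoted mail
describes.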

Clang does support separate `.cc -> .pcm` and `.pcm -> .o` commands,
but, IIRC, this is not actually equivalent to `.cc -> {.pcm, .o}` in
that the codegen may differ in subtle ways that may matter. I'm
interested in correctness before we start really squeezing performance
out of the setup, so I'd like to avoid adding more bumpy roads along the
way before we have a smoothed-out path to compare against if/when bugs
crop up.
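
To make the two shapes concrete, here is a hedged sketch of the Clang
invocations for a hypothetical interface unit foo.cppm, assuming a
recent Clang that supports `--precompile` and `-fmodule-output`:

    # split pipeline: .cc -> .pcm, then .pcm -> .o (two build graph nodes)
    clang++ -std=c++20 --precompile foo.cppm -o foo.pcm
    clang++ -std=c++20 -c foo.pcm -o foo.o

    # combined pipeline: .cc -> {.pcm, .o} in one invocation
    clang++ -std=c++20 -fmodule-output=foo.pcm -c foo.cppm -o foo.o

The equivalence question is whether the object built from foo.pcm in
the split pipeline matches the one built directly in the combined
pipeline; until that is settled, sticking to the combined shape keeps a
known-good baseline to compare against.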

Performance improvements I know of (though each may actually end up
worse in specific circumstances; we need actual numbers to know):

- batch scanning: scan a set of sources in a single shot (supported by
  P1689 already; see the first sketch after this list)
  pro: fewer scanning commands
  con: must rescan many sources if any one of them changes
- combine scanning and collation: a batch scanner can also do the
  collation work
  pro: fewer commands
  con: collation requires build system and build tool knowledge;
       probably hard to truly abstract it out into a combined tool given
       that scanning is so toolchain-dependent
- "two phase" module generation: fissioning codegen into `.cc -> .cmi`
  and `.cmi -> .o` steps
  pro: better parallelism unlocked (importers can run before codegen of
       what they import
  con: harder to implement on the toolchain side? without build system
       knowledge of "makes a CMI", requires dynamic rule generation
       during the build
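
For the batch-scanning item above, a sketch of what a single-shot scan
can look like with clang-scan-deps and a compilation database (the file
names are hypothetical and the P1689 output is abbreviated to its rough
shape):

    clang-scan-deps -format=p1689 \
        -compilation-database=compile_commands.json > deps.json

    # deps.json, roughly: one "rules" entry per scanned translation unit
    {
      "version": 1,
      "revision": 0,
      "rules": [
        { "primary-output": "foo.o",
          "provides": [ { "logical-name": "foo", "is-interface": true } ],
          "requires": [] },
        { "primary-output": "bar.o",
          "provides": [],
          "requires": [ { "logical-name": "foo" } ] }
      ]
    }

One command covers every TU in the database (the pro), but the whole
set gets rescanned whenever any of them changes (the con).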
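
And for the two-phase item, a purely hypothetical Make-style sketch of
why it helps parallelism; $(EMIT_CMI) and $(CODEGEN) are placeholders,
not real compiler interfaces:

    # hypothetical split rules for an interface unit foo.cc
    foo.cmi: foo.cc
        $(EMIT_CMI) foo.cc -o foo.cmi    # cheap: no code generation
    foo.o: foo.cmi
        $(CODEGEN) foo.cmi -o foo.o      # expensive: code generation
    bar.o: bar.cc foo.cmi
        $(CXX) -std=c++20 -c bar.cc -o bar.o   # importer waits only on foo.cmi

bar.o can start as soon as foo.cmi exists, overlapping with foo.o's
codegen; with the single-command rule it has to wait for all of foo's
compile to finish.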

--Ben
