> Even then, many vendor compilers and linkers have many
> non-conformances, and outright bugs.  Take a look at the
> number of workarounds that make their way into gnulib to
> cover breakage in the POSIX APIs alone.
> 
> You can either try to remember what all of those are, or
> use something like autoconf to probe for known bugs, and
> gnulib to plug them, or you can link against a shim library
> like GNU libposix which will do all of that automatically
> when it is built and installed, allowing you to write to the
> POSIX APIs with impunity.

The autoconf ecosystem represents a hypothesis.  I think we
have gathered enough data to seriously evaluate the truth of
the hypothesis, and I don't think it's worked out very well.

Before auto*, the "old way" was for each package to separate out
platform-specific code into a module per platform, e.g., sunos.c.
That meant that each package had to have an expert for each
platform, somebody who was familiar with that package and that
platform and who knew C.  Each time a platform revved, there
would be a delay while the platform expert for each package
figured out what to do (generally throw in an ifdef and write
a couple of lines of code).
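
To make the contrast concrete, here is a minimal sketch of that
style.  The platform guard and the choice of strlcpy as the missing
function are my invention, not taken from any particular package:

    /* sunos.c -- a per-platform module in the "old way", compiled
     * only on SunOS.  Names here are illustrative, not from a real
     * package. */
    #include <stddef.h>
    #include <string.h>

    #if defined(sun) || defined(__sun)
    /* Suppose this platform's libc lacks strlcpy: supply one. */
    size_t strlcpy(char *dst, const char *src, size_t siz)
    {
        size_t srclen = strlen(src);
        if (siz > 0) {
            size_t n = (srclen < siz - 1) ? srclen : siz - 1;
            memcpy(dst, src, n);
            dst[n] = '\0';
        }
        return srclen;          /* strlcpy reports the length of src */
    }
    #endif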

In the Brave New World of GNU auto*, in theory all packages can
share all of the platform-specific tweaks, and in theory the tweaks
aren't specific to platforms anyway, but to features.  In practice,
however, when a platform revs, all of the tweak-detection code
breaks, which means that a 5,000-line shell script goes "configure.sh:
4232: syntax error", and the situation can be fixed only by somebody
who is an expert on:
* that platform
* the package
* C
* M4
* bash
* autoconf
* autoheader
* libtool
* gawk (the gawk scripts say #!/usr/bin/awk at the top, but woe
  betide anybody who attempts to run them without "awk" being
  gawk)

So there is a delay until one of the very few people on the planet
conversant with all of those things figures out what to do.  The
feature tests are brittle (actual example: we decide whether we have
MIT Kerberos or Heimdal Kerberos by seeing whether libkrb5 contains
some oddly-named extension function; a year later, the other group
implements that function and, kablooie, no package knows whether to
-lkrbsupplemental or -lkrbadditional).
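
For anybody who has not peered inside a generated configure script,
a library check like that boils down to compiling a throwaway program
along these lines and seeing whether it links.  The symbol name below
is invented; the real test used whatever oddly-named function was
MIT-only at the time:

    /* conftest.c -- the sort of throwaway probe configure generates.
     * Build with something like "cc conftest.c -lkrb5"; if it links,
     * the symbol exists and the script concludes "MIT Kerberos".
     * The moment the other library grows the same symbol, that
     * conclusion silently becomes wrong.  The name is hypothetical. */
    char krb5_mit_only_extension(void);

    int main(void)
    {
        return (int) krb5_mit_only_extension();
    }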

In both the "old way" or the "new way", every time a platform revs
most complex packages fail to build.  In the first scheme, it is
frequently the case that anybody competent to build a package from
source can lash together a fix which works for their situation until
an official fix comes out--and there's a good chance that the simple
fix is actually the right fix for all users of that package on that
platform.  The second scheme is based on the hypothesis that one
many-skilled person on one platform can tweak an immensely complex
ecosystem so that it will run on many platforms that person has no
access to.  I think that hypothesis has turned out to be false.
Packages are still buildable on exactly those platforms where an
expert has done work specific to that package and that platform,
only now it is much harder to diagnose and fix build problems.

Essentially, the underlying assumption was that an N*M problem
(N packages times M platforms) could be collapsed down to an N+M
problem; sadly, the complexity
of the result is more like 2**(N+M).

Dave Eckhardt

P.S. I also think we have enough data to reject the hypothesis
that 5,000-line shell scripts are a good idea.  Both hypotheses
had their attractiveness at inception, but the point of running
experiments is (hopefully) to learn from them.

P.P.S. I am leaving out conundrums like "the feature tests that
auto* version x.y uses are not compatible with the feature
tests used by auto* version y.x, so you can't switch a package
from auto* x.y to y.x, but auto* x.y predates the existence of
the platform I'm trying to build on, so it does *everything*
wrong on that platform".
