Martin D Kealey <mar...@kurahaupo.gen.nz> writes:

>> Martin D Kealey <mar...@kurahaupo.gen.nz> writes:
>> > it's SLOW.
>
>
> Any comments on this point?
>
> It seems like the main causes of inadequate speed are:
> (1) Lack of parallelism.
> (2) A monolithic probe-test-result cache structure, that's either "all
> valid" or "all discarded".

Sure, the issue you have is with the speed of the ./configure run,
right? That is, not something like autoreconf, automake, etc.

The ./configure script is just a shell script. Other tools like cmake
and meson are faster when performing similar checks because, for
example, they can simply modify strings in memory instead of calling
sed (which requires a pipe, fork, and exec each time).
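
As a rough illustration (a hypothetical snippet, not taken from any
real configure script), compare two ways of stripping a suffix from a
string in shell:

    name="libfoo.so.1.2.3"
    # How generated configure scripts often do it: each call sets up
    # a pipe, forks a subshell, and execs sed.
    base=`echo "$name" | sed 's/\.so\..*$//'`
    # A pure-shell parameter expansion does the same work with no
    # fork or exec at all.
    base=${name%%.so.*}

Multiply that per-call process overhead by the hundreds of checks in a
typical configure script and the difference adds up.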

The benefit of autoconf/automake is that a user downloading a tarball
receives the generated ./configure script and Makefile.in, so they do
not need autoconf/automake installed to build the program. That is not
the case for cmake and meson, as far as I recall; one needs cmake or
meson installed to build a source tarball.
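
Put differently, the whole build-from-tarball workflow needs nothing
beyond a shell, make, and a compiler (the package name below is made
up):

    tar -xzf foo-1.0.tar.gz
    cd foo-1.0
    ./configure --prefix=/usr/local
    make
    make install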

> My core issue is why do all the compiler and OS probes need to be done by
> every project? It's not like those things change on a daily basis - unlike
> deployment options like $exec_prefix, which can (and in my case DO) change
> on every build.
>
> There's still an important place for autoconf, but I think it could be
> improved by (a) separating it into distinct phases, and (b) separately
> caching the result of each probe, indexed by relevant identifiers (OS,
> compiler, libc), so that they could potentially be distributed as an
> accompaniment for each *compiler*.

One would need a cache for each combination of distribution, kernel
version, and compiler. Updating any one of those would invalidate the
cache. I think that would quickly become unmaintainable.
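
To make the combinatorics concrete, here is a hypothetical sketch of
the key such a shared cache would need to be indexed by; bumping any
one component (a distro point release, a kernel update, a compiler
upgrade) invalidates everything stored under it:

    # Hypothetical cache key: distribution + kernel + compiler.
    key="$(. /etc/os-release; echo "$ID-$VERSION_ID")_$(uname -r)_$(gcc -dumpfullversion)"
    # e.g. "debian-12_6.1.0-18-amd64_12.2.0"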

You can use cache files on a project-specific basis [1].
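
For example (the flags are real configure options; the paths are made
up):

    # Reuse results across re-runs in one tree:
    ./configure -C    # short for --config-cache, uses ./config.cache
    # Or point related builds at a shared cache file:
    ./configure --cache-file=$HOME/.cache/myproject/config.cache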

Collin

[1] 
https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.72/autoconf.html#Cache-Files
