Bruce Dubbs wrote:
> We might also 
> consider using the environment variable CONFIG_SITE to cache configure 
> settings.  E.g.
> 
>       export CONFIG_SITE=/home/lfs/config.site
> 
>       # /home/lfs/config.site for configure
> 
>       # Give Autoconf 2.x generated configure scripts a shared default
>       # cache file for feature test results, architecture-specific.
>       if test "$cache_file" = /dev/null; then
>         cache_file="$prefix/var/config.cache"
>       fi

Actually, I'm not sure how safe this is.  See, for example, 
http://www.gnu.org/software/autoconf/manual/autoconf.html#Cache-Files:

"The site initialization script can specify a site-wide cache file to 
use, instead of the usual per-program cache. In this case, the cache 
file gradually accumulates information whenever someone runs a new 
configure script. (Running configure merges the new cache results with 
the existing cache file.) This may cause problems, however, if the 
system configuration (e.g., the installed libraries or compilers) 
changes and the stale cache file is not deleted."

As we're continually building and installing more libraries throughout 
LFS, I'd think there is a good chance that the cache file would be stale 
more often than not.

However, if our build order is correct, then the first package that 
checks whether a particular library, binary or other feature is 
available should be configured after that dependency has been met, and 
so the cached result will be accurate.  The only place this falls over 
is where there are optional dependencies that we do not fulfil for 
whatever reason.  At that point, we need to tell users how to invalidate 
either that specific cached result, or how to remove the entire cache 
file.  This is particularly important going into BLFS, which has many 
more such optional dependencies, and cyclical deps too.
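For the invalidation case, something like the following sketch might do.  
Note this is just an illustration: it operates on a throwaway file, and 
`ac_cv_lib_foo_main` is a hypothetical stand-in for whichever cached test 
has actually gone stale; in practice $cache would be the shared 
$prefix/var/config.cache from the config.site above.

```shell
#!/bin/sh
# Demonstration on a throwaway file.  ac_cv_lib_foo_main is a
# hypothetical cache entry standing in for the stale result.
cache=$(mktemp)
cat > "$cache" <<'EOF'
ac_cv_lib_foo_main=${ac_cv_lib_foo_main=yes}
ac_cv_header_stdio_h=${ac_cv_header_stdio_h=yes}
EOF

# Invalidate just the one result; the next configure run re-tests only
# that feature and merges the fresh answer back into the cache.
sed -i '/^ac_cv_lib_foo_main=/d' "$cache"
grep '^ac_cv' "$cache"    # only the stdio.h entry is left

# Or discard the whole cache and let configure rebuild it from scratch.
rm -f "$cache"
```

The per-entry sed approach is friendlier on big builds, since deleting 
the whole file throws away every accumulated result rather than just the 
stale one.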

Regards,

Matt.