A colleague of mine who follows this list noticed this e-mail:

http://linuxfromscratch.org/pipermail/lfs-dev/2008-March/061064.html

I just subscribed to the list, but I didn't get the e-mail so I can't really 
reply properly to it.  My colleague was curious whether we could use any of that 
automation, or whether we could contribute back some of our own automation 
efforts (if only so we don't have to keep modifying LFS as we upgrade; it would 
be great if we could use the auto-extracted LFS scripts).

At my employer, I've spent a great deal of time doing things with LFS related 
to the above e-mail that might be of interest (all of this is LFS 6.2-based 
work, but it'd be straightforward to upgrade, from what I've seen of LFS 6.3):

1. Modified all of the LFS commands so that /tools can be moved to an arbitrary 
place (it still has to be the same path inside and outside of the chroot, but 
the path you choose does *NOT* have to be /tools).  I did this for several 
reasons, the primary one being that we have an automated build and I wanted to 
be able to run two of them at once without having to have multiple developers 
communicate.  I ended up putting them in /home/${LFS_USER}/tools, where each 
builder has a unique LFS_USER name.  The one caveat is that you have to put the 
path into a regex to modify some of the patches that are applied to the gcc 
toolchain, so the path has to be "regex safe", but I don't think that is too 
much of a restriction.  As LFS useradd doesn't let you use most special chars, 
it was pretty easy to be sure that wasn't a problem.
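For reference, the path-escaping piece looks roughly like this (a sketch rather 
than our exact scripts; TOOLS_DIR and demo.patch are stand-in names):

```shell
# Stand-in for the per-builder tools directory:
TOOLS_DIR=/home/lfsbuilder/tools

# Escape the characters that are special on the right-hand side of a sed
# s/// expression (the / delimiter and &):
ESCAPED=$(printf '%s\n' "$TOOLS_DIR" | sed 's/[\/&]/\\&/g')

# Stand-in for a toolchain patch that hard-codes /tools:
printf 'prefix = /tools\n' > demo.patch

# Rewrite the patch before applying it:
sed "s/\/tools/$ESCAPED/g" demo.patch > demo.patched
cat demo.patched
```

The same escaped path can be substituted into any of the book's commands that 
mention /tools.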

2. I've got an 8 CPU machine, so it was well worth it to track down this hint 
and get it up to date:
http://www.linuxfromscratch.org/hints/downloads/files/parallelcompiling.txt

This is a *HUGE* time win.  As multi-CPU machines become more common, I think 
folks who do lots of LFS builds would appreciate this being integrated into 
LFS.  Our build went from 4-5 hours pre-parallelization to ~1.5 hours in 
parallel.  It'll compute the number of CPUs from /proc if the level of 
parallelization isn't set, and use double the number of CPUs (that seemed to be 
the sweet spot).  It also turns the build into an I/O-bound problem, rather 
than the CPU-bound problem it is with one CPU.
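The CPU detection amounts to something like this (MAKE_JOBS is just my name for 
the parallelism variable here, not necessarily the one the hint uses):

```shell
# If the caller didn't set the parallelism level, compute it from /proc.
if [ -z "$MAKE_JOBS" ]; then
    # Count "processor" lines in /proc/cpuinfo (one per CPU on Linux):
    NCPUS=$(grep -c '^processor' /proc/cpuinfo 2>/dev/null)
    # Fall back to 1 if /proc/cpuinfo is unavailable:
    [ "$NCPUS" -gt 0 ] 2>/dev/null || NCPUS=1
    # Double the CPU count was the sweet spot for us:
    MAKE_JOBS=$((NCPUS * 2))
fi
echo "would run: make -j$MAKE_JOBS"
```

Each package build then passes -j"$MAKE_JOBS" to make.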

Nothing terribly tricky in LFS, other than that bash compiles fine in parallel 
9 times out of 10, but fails the rest.  There's a patch floating around that 
fixes this, but it's not yet in the version of bash in LFS 6.2 (not sure about 
6.3).

3. Modified the build so that all binaries that can be built outside of the 
source tree are.  Not sure this is of any use, but I always prefer to do this.
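The shape of an out-of-tree build, with a fake configure script standing in for 
a real package so the sketch actually runs (SRC and BUILD are my names):

```shell
# Unpacked source tree and a sibling build directory:
SRC=$PWD/pkg-1.0
BUILD=$PWD/pkg-build
mkdir -p "$SRC" "$BUILD"

# Fake configure script standing in for a real package's:
printf '#!/bin/sh\necho "configured in $(pwd)"\n' > "$SRC/configure"
chmod +x "$SRC/configure"

# The actual point: configure is invoked from outside the source tree,
# so all build products land in $BUILD.
cd "$BUILD"
"$SRC/configure"
```

With a real package, make then runs in $BUILD and the source tree stays 
pristine (binutils and gcc require this layout anyway).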

4. Tracked down virtually every package's way of changing the root it installs 
into.  For 95% of the autoconf packages you use DESTDIR, but I tracked down 
almost every other package's way of doing this (the Red Hat RPM spec files had 
the answers that were too hard to figure out from the Makefiles or build 
documentation; binutils is really weird, as I recall).  This was primarily to 
allow me to take an LFS 6.0 bootstrap and build an LFS 6.2 development chroot 
(we have several projects based on different versions of LFS, each in their own 
chroots, rather than building dedicated machines).  The only place to be 
careful is if the package detects things from a running kernel; then you have 
to run a kernel with the right features.  As LFS 6.2 is also the target for the 
embedded system I work on, I just use the binaries from the development chroot 
and install them with something like make DESTDIR="/final_image/" install.  It 
was easier to install only the packages I needed into a target root than to 
duplicate the dev chroot and remove the development tools.
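As a self-contained illustration of the DESTDIR mechanism, here's a throwaway 
Makefile standing in for a real package (it uses .RECIPEPREFIX, which needs 
GNU make 3.82+, only to avoid literal tabs in the heredoc; final_image here is 
a local demo directory, not the real image root):

```shell
# Throwaway Makefile whose install rule prefixes every path with DESTDIR,
# the same convention almost all autoconf packages follow:
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
prefix = /usr
install:
> mkdir -p $(DESTDIR)$(prefix)/bin
> echo demo > $(DESTDIR)$(prefix)/bin/demo
EOF

# Install into a staging root instead of /:
make DESTDIR="$PWD/final_image" install
ls final_image/usr/bin
```

Swap "$PWD/final_image" for the real target root and this is exactly the 
install step for the embedded image.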

5. Worked out how to use mke2fs, dd, sfdisk, and grub to generate bootable disk 
images that include multiple partitions, so you don't need a spare disk or 
spare partitions (you could just as easily make network-bootable images; that's 
on the list of fun projects that I'll never have time to do, and it'd be a boon 
for automated testing to just boot off the network).  This allows us to 
generate bootable images that can be dd'ed onto flash drives for our embedded 
systems.
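In outline, the image creation looks like this (a sketch under assumptions: a 
64 MB image with one bootable partition; the partitioning, filesystem, and grub 
steps need root and loop devices, so they're shown as comments):

```shell
IMG=disk.img

# Sparse 64 MB image file: seek past the end rather than writing zeros.
dd if=/dev/zero of="$IMG" bs=1M count=0 seek=64 2>/dev/null

# The remaining steps, roughly (as root):
#   printf 'label: dos\ntype=83, bootable\n' | sfdisk "$IMG"
#   losetup -o $((2048 * 512)) /dev/loop0 "$IMG"
#   mke2fs /dev/loop0
#   ... mount it, copy the target root in, run grub to write the MBR ...
# The finished image can then be dd'ed straight onto a flash card.
wc -c "$IMG"
```

The only fiddly part is computing the loop-device offset from the partition 
table so mke2fs formats the partition rather than the whole disk.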

6. Split the build and install of each package into separate functions, so 
build_bash builds bash and install_bash installs it (each install function 
respects the variable TARGET_ROOT for installing to a non-/ location; that's 
item #4 from the list).  I omitted all of the "check" functionality, but it'd 
be easy to add back in piecemeal.  The build and logging are then done via a 
for loop that just runs each function.  Using the shell keyword "time" and 
redirection, we capture all of the logs and timing information, both for 
debugging and for finding the biggest time sinks when looking for ways to 
speed up the build.  This is great for finishing up a partial build that 
failed while debugging (just comment out the parts of the loops that worked).
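A stripped-down sketch of that layout (the stand-in function bodies, package 
list, and log directory names are mine, not our real scripts; it relies on 
bash's "time" keyword):

```shell
TARGET_ROOT=${TARGET_ROOT:-/}
LOGDIR=$PWD/logs
mkdir -p "$LOGDIR"

# Stand-in bodies; the real functions hold the book's commands.
build_bash()   { echo "building bash"; }
install_bash() { echo "installing bash into $TARGET_ROOT"; }

PACKAGES="bash"

for pkg in $PACKAGES; do
    # bash's "time" keyword reports on stderr, so redirecting the brace
    # group puts the timing in the same log as the build output.
    { time "build_$pkg"; }   > "$LOGDIR/$pkg.build.log"   2>&1
    { time "install_$pkg"; } > "$LOGDIR/$pkg.install.log" 2>&1
done
```

To resume a failed build, you comment packages out of PACKAGES; to populate 
the target image, you run a loop over only the install functions with 
TARGET_ROOT set.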

This is really helpful; we maintain several sets of functions that we share 
across different embedded systems.  If we need a custom build argument for a 
package on one system, we duplicate the standard function and customize it 
under a special name.  Our build scripts just include the stock helper 
functions, and then the for loop that lists the actual things to build can 
vary.  So we can build very different systems without modifying too much.  
It's really nice for BLFS packages, as those vary wildly between our different 
systems.  It also means that to assemble our final embedded target, we just 
set TARGET_ROOT and run the loop with the installs rather than the builds.  
It'd be wonderful if the various LFS books kicked out uniform functions or 
Makefiles, so you'd just assemble a bit of config that called all the right 
pieces in the right order, rather than generating one huge script that is all 
inline.

Proper package management looked to be a topic of interest on the list, and 
that could be used in lieu of this.  We didn't want to roll our own package 
management, so this was our compromise.  I've tried to think about how to make 
the scripts easy to add hooks to, so that you could extend the build at 
specific points, but I just haven't gotten around to doing that yet.  I'm less 
interested in LFS as a learning vehicle (now that I've learned so much from 
it), and more interested in using it to automate the building of custom 
embedded systems where everything is built from source.

That's pretty much all of the significant improvements I've made for automating 
LFS with respect to generating a bootable image quickly and efficiently.

I've automated the build soup to nuts (you type makeWorld.sh, wait 1.5 hours, 
dd the image to a compact flash card, plug it into the Flash-to-IDE adapter, 
and it boots), but it only works on an LFS machine.  I can't get it to run on 
a non-LFS distro, so I'm highly interested to see if and how Jay accomplishes 
that.  The one problem I can't figure out shows up if you follow the 
instructions here:

http://www.linuxfromscratch.org/lfs/view/6.2/chapter04/settingenvironment.html

If you run su - lfs -c "./buildBootstrapTools.sh" on a non-LFS machine, the 
command runs in the background but gives you an interactive terminal, so it'll 
never exit (or I didn't wait long enough).  It works just fine on an LFS 
machine, but when I try it on CentOS 5.0 or a recent Ubuntu (not sure of the 
version), it always gives me a console, so I can't run it unattended.  If I 
could solve this problem, it'd run fine on a CentOS machine (I think).  I had 
a hack that involved redirection, outputting a wrapper script, and invoking 
the exec in a wrapper script for buildBootstrap.sh rather than inside 
.bash_profile.  It worked but was too hideous to maintain.  It's some weird 
interaction between su, bash's login code, and exec.  I never did figure it 
out, and it wasn't mission critical, so I gave up.  It works just fine if I do 
"su - lfs" interactively and then type the commands on the various distros, 
just like it does on LFS.

I'm not a particularly clever person, and I knew nothing about LFS 4 months 
ago (all this took about a man-month to do), so the suggestion that these 
things can be done is probably the most valuable thing I have to offer.  I'm 
sure the LFS experts can re-do my work quickly.  If any one of these concepts 
is integrated into LFS, it'll save me huge amounts of time when we upgrade to 
a newer version of LFS.  However, if there is interest, I can talk with my 
boss about contributing back the scripts and automation I've done.  I'd really 
like to, but it seems a waste of time and effort if they aren't of any use or 
interest to the LFS folks.  I'm just not sure if there is any interest, or 
what the best way to go about making such a contribution is.  I'm fairly 
comfortable with DocBook, so it'd be fairly easy to modify the book and send 
in patches, or to just send in snippets of the commands.  If anybody has any 
questions or comments, I'd be highly interested.

If any of this is of interest, I'd happily discuss it with folks on my own 
time.  If I get permission, I'd just submit patches for the exact scripts and 
automation I've done.  I figure that LFS has been hugely valuable to my 
employer, and I would very much like to contribute back to LFS in any way I 
can (my boss would too; there are legal folks and mucky-mucks in charge to get 
approval from).  Thanks to the folks who assembled the LFS books and LFS 
LiveCDs.  They've been invaluable to me.

Kirby

