On 26 June 2011 14:59, Stuart Longland <redhat...@gentoo.org> wrote:
> Hi all,
>
> I've been busy for the past month or two updating some of my
> systems.  In particular, the Yeeloong I have hasn't seen attention in a
> very long time.  As soon as I update one part, however, I find some swath of
> packages breaks because of a soname change, anything Python-related stops
> working because of the move from Python 2.6 to 2.7, or Perl gets updated.
>

I have a system I use (and developed) which tries to improve
consistency.  It greatly increases the number of compile/test runs that
get done in the process, but for my purposes that's fine, because I
*want* things to be rebuilt needlessly, "just to make sure they can
still be built".

<Massive rant follows>

I make some assumptions:

Take a timestamp.

Record all packages that need to be updated or reinstalled due to USE changes.

Assume that there is a chance that any direct dependent of those
packages may become broken by this update, either at install time or
during runtime ( ie: .so breakages, ABI breakages, $LANGUAGE breakages,
etc. )

Consider that all these packages were installed before that timestamp.

You can then mostly assume that all packages installed after that
timestamp were built taking into account the changes that have
occurred in their dependencies.

For the most part, this principle appears to cover a very large range
of scenarios, and is somewhat like a proactive "revdep-rebuild" that
assumes that every new install/upgrade breaks everything that uses it.
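The timestamp principle above can be sketched with plain `find`; note
that nothing here is from the real toolset, and every path and package
name is made up for illustration ( a real system's install records
live under /var/db/pkg ):

```shell
#!/bin/sh
# Sketch of the timestamp idea against a mock package database.
set -e
VDB=$(mktemp -d)
STAMP=$(mktemp)
mkdir -p "$VDB/dev-lang/perl-5.12.3" "$VDB/kde-base/kdelibs-4.6.3"
# perl's install record predates the timestamp; kdelibs' is newer:
touch -d '2011-01-01' "$VDB/dev-lang/perl-5.12.3"
touch -d '2011-06-01' "$STAMP"
touch -d '2011-06-15' "$VDB/kde-base/kdelibs-4.6.3"
# Anything not newer than the stamp may still be built against old
# dependencies, so it is a rebuild candidate:
CANDIDATES=$(find "$VDB" -mindepth 2 -maxdepth 2 -type d ! -newer "$STAMP" \
  | sed "s#^$VDB/##")
echo "$CANDIDATES"
```

Only the package whose record is older than the stamp comes out as a
candidate; everything installed after it is presumed consistent.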

To this end, I have conjured a very naïve toolset which does what I want it to.

https://github.com/kentfredric/treebuilder

Note that a lot of the paths are hardcoded in the source, and it's only
really on GitHub for educational value.

Workflow proceeds as follows ( nb: I /do/ use paludis, so I'll be
using its terms for the sake of accuracy ):

./mk_timestamp.sh  # this sets our timestamp values.

echo >| rebuild.txt   # start over the list of things that need rebuilding

Then I sync portage and overlays. ( I have a custom script that calls
cave sync as well as doing a few other tasks )

{

then I do a "deep update of system including new uses", but don't execute it:

cave resolve -c system

And I record each updated/reinstalled package line-by-line in rebuild.txt

Then perform it:

cave resolve -c system --continue-on-failure if-independent -x

}

And then I repeat the same steps with 'world'.

Then I run ./sync.sh, which works out all the "still old" packages,
computes some simple regexps, and emits 'rebuilds.out': a list of
packages that "might be broken", which I'll reinstall "just in
case".
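A guess at the shape of that matching step ( the real sync.sh differs;
these file names and package lists are purely illustrative ): turn
each updated package into an anchored regexp, then grep it against the
installed-version list to find the "might be broken" candidates.

```shell
#!/bin/sh
# Hypothetical sketch: rebuild.txt holds one category/package per
# line; installed.txt holds installed category/package-version lines.
set -e
TMP=$(mktemp -d)
printf 'dev-lang/perl\n' > "$TMP/rebuild.txt"
printf '%s\n' 'dev-lang/perl-5.12.3' 'dev-perl/DBI-1.616' \
  'kde-base/kdelibs-4.6.3' > "$TMP/installed.txt"
# dev-lang/perl -> ^dev-lang/perl-[0-9]  (anchor name, require version)
sed 's#^#^#; s#$#-[0-9]#' "$TMP/rebuild.txt" > "$TMP/regex.txt"
MATCHES=$(grep -f "$TMP/regex.txt" "$TMP/installed.txt")
echo "$MATCHES"
```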

Very often this is completely needless: e.g. an -r bump to kdelibs
triggers me rebuilding everything in KDE, and an -r bump to perl has me
rebuilding every perl module in existence, plus everything that uses
perl ( incidentally including everything in KDE, as well as GHC and a
few other very large nasties ).

Once this list is complete, there are 2 approaches I generally take:

1. If the list is small enough, I'll pass the whole thing to cave/paludis.

  cave resolve -c -1 $( cat /root/rebuilder/rebuilds.out )

   and record any significant changes ( ie: new uses or updates of
dependents for things that are "orphans" for whatever reason )

   and then

  cave resolve -c -1 $( cat /root/rebuilder/rebuilds.out ) -x
--continue-on-failure if-independent

or

2. If the list looks a bit large, I'll pass the things to reinstall
in random groups:

   dd if=/dev/urandom of=/tmp/rand count=1024 bs=1024  # generate
random file for 'shuf' to produce random but repeatable sort.

  cave resolve -c -1 $( shuf --random-source=/tmp/rand -n 200
/root/rebuilder/rebuilds.out )

   again, noting updates/etc

   cave resolve -c -1 $( shuf --random-source=/tmp/rand -n 200
/root/rebuilder/rebuilds.out )  -x --continue-on-failure
if-independent
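The fixed --random-source is what makes this work: shuf is repeatable
against the same source file, so the dry run and the -x run draw
exactly the same batch.  A minimal demonstration ( package names made
up ):

```shell
#!/bin/sh
# Two shuf invocations with the same random source pick the same set.
set -e
TMP=$(mktemp -d)
printf '%s\n' app-a/one app-b/two app-c/three app-d/four \
  app-e/five > "$TMP/rebuilds.out"
dd if=/dev/urandom of="$TMP/rand" count=1 bs=1024 2>/dev/null
ONE=$(shuf --random-source="$TMP/rand" -n 3 "$TMP/rebuilds.out")
TWO=$(shuf --random-source="$TMP/rand" -n 3 "$TMP/rebuilds.out")
[ "$ONE" = "$TWO" ] && echo "same batch both runs"
```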


After each build run, sync.sh is re-run, updating the list of things
which this code still considers "broken", and I continue the
build/build/build/sync pattern until a run covers every item in
rebuilds.out and each one either fails or is skipped.

At this point, all the packages still listed in rebuilds.out are deemed
"somewhat broken".  This list is then concatenated with "brokens.out",
and the process is repeated until all results fail or skip.

Then brokens.out and rebuilds.out are concatenated together and
replace broken.txt, which is a list of "things to check later".
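That bookkeeping is just a set union; assuming one package per line,
something like the following ( file names as in the mail, but the
merge logic and package names here are my guess, not the real script ):

```shell
#!/bin/sh
# Merge the two failure lists into the "check later" list, dedup'd.
set -e
TMP=$(mktemp -d)
printf '%s\n' dev-perl/DBI app-misc/foo > "$TMP/brokens.out"
printf '%s\n' app-misc/foo sys-apps/bar > "$TMP/rebuilds.out"
sort -u "$TMP/brokens.out" "$TMP/rebuilds.out" > "$TMP/broken.txt"
cat "$TMP/broken.txt"
```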

At this point I consider that merging is as good as it's going
to get, and the entire process starts over: update the timestamp,
sync portage.

Over time, broken.txt adapts, growing as new things break due to
changes in their dependencies and shrinking as fixes enter portage.

Long story short, all of the above is mostly insanity, but it's
reasonably successful. And after all, I am an insane kinda guy =)

--
Kent

perl -e  "print substr( \"edrgmaM  SPA NOcomil.ic\\@tfrken\", \$_ * 3,
3 ) for ( 9,8,0,7,1,6,5,4,3,2 );"

http://kent-fredric.fox.geek.nz
