On Sat, Apr 21, 2007 at 10:31:21AM +0200, "Peter Valdemar Mørch (vol)" wrote:
> Douglas Allan Tutty dtutty-at-porchlight.ca |volatile-lists| wrote:
> >I use aptitude (this is not a troll, please), and I use it interactively.
> >I have only those packages that I specifically _want_ installed marked
> >as manual with everything else being automatic.
> 
> Aaaaaa! What is *THIS*? "manual" contra "automatic"? This sounds 
> interesting! I just use
> 
Aptitude keeps track of which packages it installed automatically to 
satisfy dependencies (and, by default, Recommends) as it installs other 
packages. People have problems with mixing and matching apt and aptitude 
because they behave differently. On occasion, you remove one package 
manually and aptitude removes _all_ the automatically installed packages 
that nothing else now depends on.

Hence the "aptitude removed all my packages"-type posts 
which we see on this list from time to time. You can use "aptitude keep-all" 
occasionally to force aptitude to regard all installed packages as to be 
kept. You can also use "aptitude install -sf" to simulate (-s) a fix of 
broken dependencies (-f) before committing to anything.
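
A minimal sketch of the relevant commands (the search patterns are 
aptitude's own; the package name "libfoo" is just a placeholder):

    aptitude search '~i!~M'      # installed packages marked "manual"
    aptitude search '~i~M'       # installed packages marked "automatic"
    aptitude markauto libfoo     # flag a package as automatically installed
    aptitude unmarkauto libfoo   # flag it as manually installed
    aptitude keep-all            # cancel all pending install/remove/hold actions

Anything marked automatic is a candidate for removal once no manually 
installed package depends on it any more.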

> # apt-get install bla-bla-bla
> 
> Didn't know apt/dpkg kept track of and internally distinguished between 
> manual and automatic installs. I was always curious about why it didn't 
> distinguish between packages I explicitly ask to have installed and 
> prerequisites for those packages, and now I find that it does!!!
> 

apt doesn't, per se: aptitude does.

> 
> >With stable, you should only get security updates which
> >should not cause package breakage.
> 

Here's a suggestion. Take an older machine or one you can afford to play 
on. Install Debian 4.0 "Etch". Install only the base system.

Then use dselect / apt-get to gradually build up your package base.

Install only the things you need. Take away some of the things you don't 
need - ftp, finger, whois, portmap, nfs would all be on my list to take 
away. Get a good idea of how dependencies work and what's pulled in by 
large meta-packages, e.g. x-window-system or kde. That will be useful 
knowledge in any event.
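
A couple of apt-cache invocations are handy for that kind of exploration 
(meta-package names as in the examples above):

    apt-cache depends x-window-system   # what the meta-package pulls in
    apt-cache rdepends libc6            # what depends on a given package
    apt-cache show kde | grep ^Depends  # the raw Depends: line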

Look at the archives of this list. I regard "stable/testing/unstable" 
not as absolute criteria of stability but rather as the rate of package 
change. Stable, once released, barely changes. Sarge had six point 
releases in 20 months or so, the last occurring only a few hours before 
the release of Etch - but even if you add up all the changes through 
that period, they probably didn't come to more than half a DVD's worth, 
and you could update relatively seamlessly from a machine running 3.1r0 
to 3.1r6 with no obvious problems. Remastering CDs was just a 
convenience: it ensured that people picking up install media would have 
the fixes already installed, and that people could download an updates 
CD and be sure they'd be up to date.

A case in point: I've a set of machines here, updated daily.

The AMD64 running stable: no changes today. No changes, as far as I can 
see, since last week. I don't anticipate any particular change until 
4.0r1 - that will be security fixes and, possibly, any minor fixes that 
didn't make the cut off for Etch and have been found since. If I've got
security.debian.org in my /etc/apt/sources.list then I get the changes 
anyway as they occur.
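
For Etch that means a sources.list along these lines (the mirror 
hostname is just one choice of many):

    deb http://ftp.debian.org/debian/ etch main
    deb http://security.debian.org/ etch/updates main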

The AMD64 running testing: 562 kB worth of changes today so far; no 
increase in disk space usage.

The Pentium running sid: 173 MB worth of changes, mostly KDE changing as 
far as I can see.

> Here goes this term "package breakage" again. Do you know what it is and 
> how it arises? Most of the time, dist-upgrade just decides to install a 
> couple of extra packages. But some other times... I just never figured 
> out what makes the difference and what the possible problems and 
> solutions are. I just pray and try my best...
> 

For fun, I've just upgraded this machine from lenny -> sid (testing ->
unstable) using aptitude. "aptitude update ; aptitude upgrade" said, 
effectively:

373 packages held back. Not updating ...

"aptitude dist-upgrade" said, effectively:

I can do the following ... and hold the following packages back

and did the upgrade, as far as it could, updating KDE in the process.

Another dist-upgrade then suggested removing the current versions of
kde and kdepim and that another 8 packages would be held back (including 
kdebase, kdeartwork, kdeaccessibility, kdemultimedia - all of which are 
kde meta-packages).

What's happening here? KDE is in flux: some individual packages have 
been updated to the latest point version (3.5.6 or so), some haven't. 
Give it a couple of days and the few packages that haven't caught up 
will have done. "aptitude dist-upgrade" will then pull in the requisite 
KDE meta-packages again (because, of course, these meta-packages depend 
on lots of individual packages to make them up, and all those packages 
need to be in sync).
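
The sequence I used above, sketched as commands - aptitude's "-s" flag 
simulates, so you can preview a dist-upgrade before committing:

    aptitude update           # refresh the package lists
    aptitude upgrade          # conservative: holds back anything needing removals
    aptitude -s dist-upgrade  # preview what a full upgrade would do
    aptitude dist-upgrade     # allowed to add/remove packages to resolve deps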

Once KDE is buildable as a lump, it will probably percolate down into 
testing and testing users will get a huge upgrade at once. If they 
don't, then they'll experience a trickle down over a few days and the 
same sort of "packages being held back" until everything's ready.

Occasionally, there are major system changes which may result in 
packages conflicting because of, e.g., incompatible versions of glibc
or a major ABI change. Very major changes may take months to work 
through gradually - developers may, for example, be asked to build their 
packages against the new glibc and force conflicts against versions 
built with the older glibc so that the two don't clash.

I'm assuming that KDE 3.5.x -> KDE 4.0 will be something very like this,
for example.

> >So what happens if you run stable, run aptitude interactively to get
> >everything set up properly, then run update, then select the
> >upgradeable and security upgrades, then tell it to go ahead?
> 
> Dunno. Haven't tried. Don't really like interactive programs for 100s of 
> installations.
> 
> >Or, if all the boxes are identical, what about something like system
> >imager?  Get one updated, create the new image, and propogate it to all
> >the systems?
> 
> The thing is these are production machines that accumulate data. So I 
> don't want to be re-installing them all the time. And I want to keep 
> them up to date... So imaging will get me started, but I'm still left 
> with this problem to solve.
> 

If you don't want to be re-installing them all the time: put them onto 
Etch _AND KEEP THEM THERE_ . They'll still be updated with security 
updates. If they're production machines that you can't afford to lose:
get a stable configuration and leave it. If clients/colleagues complain 
that they absolutely "must have" foo from testing, ask them to justify 
the resultant instability. 

Speaking personally: I've taken one AMD64 server from "unstable"
to "testing" and tracked it until "testing" became stable as Etch was 
released. Now I'm leaving it there. It was initially business critical 
that we had what were then "up to date/bleeding edge" apps, but users 
kept complaining about kernel upgrades and application changes, asking 
"can't you wait until it becomes stable and we don't have to have any 
downtime?". I tried to do upgrades once a month or more to track progress 
through testing and catch up with changes; I did them more often if I 
became aware of security issues. I wouldn't have wanted to skip months' 
worth of changes and do it in one hit. The machine was stable and usable 
throughout - but the delta in "testing" was too great to track without 
relatively frequent dist-upgrades.

"Up to date" is a variable: disk space is cheap - one way you might want 
to tackle this is to make multiple Debian mirrors. One you build and 
leave for a fortnight: the other you allow to track daily changes for a 
fortnight . At the end of the first month, you allow clients to sync to the 
now fortnight old data and copy the current data as a snapshot to the 
"two week old" mirror and carry on. This way, you have old, stable data: a 
fortnight newer if you need to catch up or your dependencies aren't 
satisfied and daily bleeding edge. Once a fortnight, you swap over the 
mirror pointers.
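
One way of wiring that up, assuming hypothetical /srv/mirror paths, a 
mirror host that offers rsync, and a client sources.list pointing at a 
"current" symlink:

    # /srv/mirror/a - frozen snapshot; /srv/mirror/b - tracking daily
    rsync -a --delete rsync://some.mirror.example/debian/ /srv/mirror/b/
    # once a fortnight: freeze b and point clients at it
    ln -sfn /srv/mirror/b /srv/mirror/current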

> Thanks for your post!!
> 
> Peter
> -- 
> Peter Valdemar Mørch
> http://www.morch.com

Hope this helps,

Andy
