On Thu, 6 Apr 2017 10:45:23 -0500 David Wright <deb...@lionunicorn.co.uk> wrote:
> On Thu 06 Apr 2017 at 11:50:56 (+0100), Joe wrote:
> >
> > Granted, there have been more little niggles with each upgrade (this
> > machine started life as sarge), things get more complicated with
> > each version.
>
> I thought lenny→squeeze was the most complicated, because lenny's
> standard kernel was not compatible with the upgrade process and
> had to be upgraded in a preliminary step. That could then lead to
> knock-on effects with non-free firmware. And, for safety, udev
> had to be immediately upgraded because of the new kernel, then
> the system rebooted to bring them into operation before the
> upgrade.

I don't remember that, though I must have gone through it. I wouldn't
dare try skipping a version.

The only serious problem I had was when exim4 jumped a version, the new
one didn't accept debconf directives, and I hadn't noticed. Upgrading
with the old configuration file kept turned out to be a big no-no: it
got into a state where even dpkg wouldn't uninstall the broken bits,
and I had to resort to deleting files manually.

> > I'm not that bothered about downtime (within reason, the
> > Debian lists get very stroppy when their emails bounce) but some
> > people are.
>
> A few minutes later you posted:
>
> > If I was a paid admin looking after multiple servers, yes, that's
> > the obvious thing to do. But this isn't my job, and I can't afford
> > to buy a second set of hardware, so the only practical test is to
> > actually do it.
>
> How about getting those freeloading critics to fork out for
> a new drive so that you can build and test a second system
> (dual-bootable) during your scheduled downtimes.

My what? It's a home server/firewall/mail server. There is no
scheduled downtime. I migrated to a new hard drive a few months ago,
and that gave me some unscheduled downtime until I discovered what the
BIOS was doing with drive naming...
it was one of those 'no, this *cannot* be happening' moments where I
copied /etc/fstab between the wrong pair of drives, thereby breaking
both old and new installations.

It still seems to be unreasonably difficult to use a working
installation to install the correct grub information to another drive
which is intended to become the new working installation: still a
matter of messing around with chroot and a sequence of mounts and
unmounts.

-- Joe
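[Editor's note: the chroot-and-mounts dance Joe describes is usually some
variant of the sketch below. The device names /dev/sdb and /dev/sdb1 are
hypothetical placeholders for the new disk and its root partition; check
yours with lsblk or blkid before running anything like this.]

```shell
# Sketch only: install grub onto a second drive from a running system.
# ASSUMPTION: the new installation's root is /dev/sdb1 and the new
# disk is /dev/sdb -- substitute your own devices.

mount /dev/sdb1 /mnt

# Bind-mount the virtual filesystems the grub tools expect to see:
for fs in /dev /dev/pts /proc /sys; do
    mount --bind "$fs" "/mnt$fs"
done

# Install grub to the new disk and generate its config, from inside
# the new installation so it picks up that system's kernels:
chroot /mnt grub-install /dev/sdb
chroot /mnt update-grub

# Recursively unmount the whole tree in one go:
umount -R /mnt
```

Using UUID= entries in /etc/fstab (the UUIDs come from blkid) also
sidesteps the BIOS drive-renaming surprises mentioned above, since they
don't change when the firmware reorders /dev/sdX names.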