On 8/7/20 3:35 pm, Andrei POPESCU wrote:
> On Wed, 08 Jul 20, 02:35:09, Andrew McGlashan wrote:
>> On 8/7/20 2:11 am, Michael Stone wrote:
>>>
>>> The short answer is that there simply isn't a good reason to do
>>> this on a modern system, and there is no volunteer to donate the
>>> enormous amount of effort required to make something work for which
>>> there isn't a good justification for expending that effort. There
>>> should be no flamewar; if someone wants the situation to change
>>> they simply need to be the person who puts in all the work.
>>
>> Just doing dist-upgrade with a perfectly acceptable file system
>> previously is no reason why it should break.
>
> Debian supports upgrading of most packages between releases.
>
> It provides no guarantees about hardware, partitioning schemes,
> partition sizes, file systems, etc.
>
> I was under the impression that LVM is used in particular for its
> flexibility in adjusting your partitions. What prevents you from
> merging '/' and '/usr'?
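[If merging were the chosen route, a rough sketch of the steps,
assuming a vg0 volume group with 'root' and 'usr' logical volumes as
in the setup described below; run from rescue media, after a backup.
The +10G figure and ext4 are placeholders, not values from this
thread:

  # Grow the root LV by roughly the size of the usr LV, then grow
  # the filesystem into it (e2fsck is required before an offline
  # resize2fs).
  lvextend -L +10G /dev/vg0/root
  e2fsck -f /dev/vg0/root
  resize2fs /dev/vg0/root
  # Copy /usr's contents onto the root filesystem, then retire the LV.
  mkdir -p /target /target-usr
  mount /dev/vg0/root /target
  mount /dev/vg0/usr /target-usr
  cp -a /target-usr/. /target/usr/
  umount /target-usr
  # Remove the /usr entry from /target/etc/fstab before rebooting.
  umount /target
  lvremove vg0/usr

After that there is only one filesystem for the initramfs to
activate, so the problem below goes away entirely.]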
Yes, that might be the best fix; but I didn't expect it to be
necessary.

On 8/7/20 9:40 am, David Wright wrote:
>> The mentioned initramfs config file has a strange note about it
>> being "dangerous" to activate all logical volumes, why?!?!?!
>
> A reference to the specific file would help. I see no mention here.

Line 35 of /usr/share/initramfs-tools/scripts/local-top/lvm2 (see
below) is the comment that mentions the risk.

Also see the attached email that I sent to the Devuan DNG list for
more reference.

Below is the file I changed; the line I added is numbered 63.

# cat -n /usr/share/initramfs-tools/scripts/local-top/lvm2
     1  #!/bin/sh
     2
     3  PREREQ="mdadm mdrun multipath"
     4
     5  prereqs()
     6  {
     7          echo "$PREREQ"
     8  }
     9
    10  case $1 in
    11  # get pre-requisites
    12  prereqs)
    13          prereqs
    14          exit 0
    15          ;;
    16  esac
    17
    18  if [ ! -e /sbin/lvm ]; then
    19          exit 0
    20  fi
    21
    22  lvchange_activate() {
    23          lvm lvchange -aay -y --sysinit --ignoreskippedcluster "$@"
    24  }
    25
    26  activate() {
    27          local dev="$1"
    28
    29          # Make sure that we have a non-empty argument
    30          if [ -z "$dev" ]; then
    31                  return 1
    32          fi
    33
    34          case "$dev" in
    35          # Take care of lilo boot arg, risky activating of all vg
    36          fe[0-9]*)
    37                  lvchange_activate
    38                  exit 0
    39                  ;;
    40          # FIXME: check major
    41          /dev/root)
    42                  lvchange_activate
    43                  exit 0
    44                  ;;
    45
    46          /dev/mapper/*)
    47                  eval $(dmsetup splitname --nameprefixes --noheadings --rows "${dev#/dev/mapper/}")
    48                  if [ "$DM_VG_NAME" ] && [ "$DM_LV_NAME" ]; then
    49                          lvchange_activate "$DM_VG_NAME/$DM_LV_NAME"
    50                  fi
    51                  ;;
    52
    53          /dev/*/*)
    54                  # Could be /dev/VG/LV; use lvs to check
    55                  if lvm lvs -- "$dev" >/dev/null 2>&1; then
    56                          lvchange_activate "$dev"
    57                  fi
    58                  ;;
    59          esac
    60  }
    61
    62  activate "$ROOT"
    63  activate "/dev/mapper/vg0-usr"
    64  activate "$resume"
    65
    66  exit 0

A line for /usr is in /etc/fstab using its UUID ... same as root is
referenced by UUID (both are in the same lvm2 volume group).

NB: If /usr wasn't on lvm2, then this problem might not have
surfaced. It probably would not have been a problem either if the
whole VG had been activated instead of just the root file system,
because the UUID would then have been "known or attainable" from the
logical volumes.

Kind Regards
AndrewM
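[A less brittle variant of the added line 63 would derive the volume
group from $ROOT and activate it whole, rather than hardcoding
vg0-usr. A sketch only, not from the packaged script, and it accepts
exactly the broader activation that the line-35 comment warns about:

  # Sketch: activate every LV in the VG that holds the root device,
  # so a separate /usr in the same VG comes up as well.  Assumes
  # $ROOT is a /dev/mapper/VG-LV style name.
  case "$ROOT" in
  /dev/mapper/*)
          eval $(dmsetup splitname --nameprefixes --noheadings --rows "${ROOT#/dev/mapper/}")
          if [ "$DM_VG_NAME" ]; then
                  lvm vgchange -aay --sysinit "$DM_VG_NAME"
          fi
          ;;
  esac

That keeps the change independent of the vg0-usr name, at the cost of
activating LVs the early boot doesn't strictly need.]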
--- Begin Message ---
Hi,

I had another "simple" server upgrade from Devuan Ascii to Devuan
Beowulf; these are the details and my workaround for the problem.

There was nothing particularly special about this server, and it
doesn't use encrypted file systems. It started out life as a Debian
Wheezy installation, migrated to Devuan Jessie, later to Devuan Ascii
and now Beowulf.

The server has /boot on its own RAID1 partition, with another RAID1
volume covering the rest of the disk; that volume is an LVM2 volume
group holding a number of logical volumes for root, swap, /usr, /var,
/home and more.

After the dist-upgrade, it failed to boot and remained at the
initramfs shell environment after having complained about not being
able to find the /usr file system via its UUID.

It had another error as well, which was fixed by raising the RUNSIZE
variable in /etc/initramfs-tools/initramfs.conf from 10% to 25%: it
was unable to find "rm" when running the boot-up scripts before
dumping itself to the initramfs shell.

Once at the initramfs prompt after fixing the first problem, I was
able to do the following:

  (initramfs) lvm
  lvm> vgchange -ay
  lvm> exit
  (initramfs) exit

And then the server would continue to boot properly.

The second fix, which I consider to be "clunky", was to adjust the
/usr/share/initramfs-tools/scripts/local-top/lvm2 file, adding in a
line near the bottom as highlighted:

  activate "$ROOT"
  *activate "/dev/mapper/vg0-usr"*
  activate "$resume"

Then I rebuilt the initramfs in the usual way:

  update-initramfs -u -k all

The original lvm2 script specifically activated only the root file
system (/dev/mapper/vg0-root), even though /usr (/dev/mapper/vg0-usr)
was a separate file system in the exact same volume group, which
stopped the boot from succeeding as expected. The other volumes come
online in due course okay.

All was good with subsequent reboots.

Now, kludge or clunky, this was required because the /usr file system
was [and continues to be] separate from the root file system, and the
initramfs only cared to activate the root file system, leaving all
the other logical volumes "NOT AVAILABLE", including /usr, which was
definitely required!

Have I fixed this appropriately, or should I somehow fix it another
way?

Kind Regards
AndrewM
--- End Message ---
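[On the closing question: one cleaner option than editing the
packaged script (which the lvm2 package will overwrite on its next
upgrade) is a local hook, since mkinitramfs also includes scripts
from /etc/initramfs-tools/scripts/local-top/. A minimal sketch,
reusing the vg0-usr name from the message; save it as, say,
/etc/initramfs-tools/scripts/local-top/activate_usr (the filename is
hypothetical), mark it executable, then run update-initramfs -u -k
all as above:

  #!/bin/sh
  # Activate the separate /usr LV early in boot; PREREQ orders this
  # after the packaged lvm2 script, so LVM metadata is already scanned.
  PREREQ="lvm2"
  prereqs() { echo "$PREREQ"; }
  case $1 in
  prereqs) prereqs; exit 0 ;;
  esac

  [ -x /sbin/lvm ] || exit 0
  # vg0/usr is this server's name for the /usr logical volume.
  lvm lvchange -aay -y --sysinit vg0/usr
  exit 0

Because the hook lives under /etc rather than /usr/share, it survives
package upgrades of lvm2.]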