This bug was fixed in the package lvm2 - 2.02.176-4.1ubuntu3.18.04.2
---
lvm2 (2.02.176-4.1ubuntu3.18.04.2) bionic; urgency=medium
* d/p/fix-auto-activation-at-boot.patch: (LP: #1854981)
Allow LV auto-activation (e.g. /usr on its own LV)
---
** Changed in: lvm2 (Ubuntu Bionic)
Status: Fix Committed => Fix Released
Feel free to test lvm2 in bionic-proposed (2.02.176-4.1ubuntu3.18.04.2)
and provide feedback on #1854981.
- Eric
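For anyone verifying, a rough sketch of enabling the proposed pocket and installing this exact version (standard SRU testing steps; adjust the mirror URL to your own):

    sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu bionic-proposed main universe"
    sudo apt-get update
    sudo apt-get install lvm2=2.02.176-4.1ubuntu3.18.04.2
    sudo update-initramfs -u    # for good measure, rebuild the initramfs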
--
Please see (LP: #1854981).
--
I don't believe this is a curtin issue; I've marked it as Invalid for
curtin. (Please do set it back to New if this is an error!)
** Changed in: curtin
Status: Incomplete => Invalid
--
I ran into the same issue yesterday with an up-to-date (May 24, 2019) 18.04.2
LTS. As mentioned above, the problem is /usr being on a separate LV.
packages:
initramfs-tools-bin 0.130ubuntu3.7
linux-image-generic 4.15.0.50.52
udev 237-3ubuntu10.21
libdevmapper1.02.1:amd64 2:1.02.145-4.1ubuntu3
It drops to the initramfs shell.
Well. Today I installed a fresh 18.04 server and just ran into this issue.
My disk setup is as follows:
/dev/sda1 - bios
/dev/sda2 - /boot
/dev/sdb (LVM)
- vg-0/Usr
- vg-0/Home
- vg-0/Root
- vg-0/Swap
/dev/sdc (LVM)
- vg-1/Var
Upon reboot I get an error within the initramfs that the root device cannot be found.
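The manual recovery used elsewhere in this thread applies at that prompt too; roughly (a sketch, assuming the busybox initramfs shell):

    (initramfs) lvm vgchange -ay    # activate all volume groups by hand
    (initramfs) exit                # resume the normal boot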
** Changed in: maas
Status: Incomplete => Invalid
--
** Tags added: id-5c51da4d8556ee2e7ae3a108
--
After investigation, the problem is inside pvscan itself.
I tested with and without this commit, and it does the trick:
without it I can reproduce the problem, with it I can't.
I'll provide a test package so impacted users can provide feedback before
I upload anything to the archive.
---
commit 15da467b52465076a8d587b94cc638bab8a0a95c
Author: David Teigland
Date: Wed Jun 15 14:19:18 2016 -0500
pvscan: do activation when lv
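For context, the auto-activation path being discussed is the pvscan call that udev makes for each physical volume as it appears; the command form is roughly this (the device name is only an example):

    # "-aay" asks pvscan to auto-activate VGs that become complete
    pvscan --cache -aay /dev/sdb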
Confused to see no movement on this bug.
The logical thing seemed to be to add another case to
/usr/share/initramfs-tools/scripts/local-top/lvm2 calling
lvchange_activate with no parameters, but it seems that doesn't work -
does activation/auto_activation_volume_list need to be set in lvm.conf
perhaps?
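If that setting is relevant, a minimal sketch of what it might look like (the volume group name vg-0 is borrowed from the disk layout quoted earlier and is only an example):

    # /etc/lvm/lvm.conf (excerpt, sketch)
    activation {
        # When set, only LVs matching an entry here are auto-activated;
        # when unset, auto-activation applies to all LVs.
        auto_activation_volume_list = [ "vg-0" ]
    }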
Could someone please describe how to apply the patch from TJ?
--
This bug report just enabled me to recover from an upgrade to Ubuntu
18.04.1. So I can confirm that this is still an issue.
Root partition on an LVM volume; LVM physical volume on a software
(mdadm) RAID.
The workaround in this comment solved the problem for me:
https://bugs.launchpad.net/ubuntu
This bug report enabled me to recover quickly from a planned upgrade
(14.04 -> 16.04) that went south. FWIW I'm able to confirm that it's a
live issue.
All of our critical workstations are deployed with LVs on top of md
devices. Some, including the one I was upgrading, use md mirrors.
FWIW:
$ ca
Patch works fine for me... Kinda odd it's been two years and it hasn't
been rolled into the upgrade. 70 machines I have to patch after
upgrading :/
--
The attachment "activate VGs when root=UUID=" seems to be a patch. If
it isn't, please remove the "patch" flag from the attachment, remove the
"patch" tag, and if you are a member of the ~ubuntu-reviewers,
unsubscribe the team.
[This is an automated message performed by a Launchpad user owned by
~brian-murray, for any issues please contact him.]
Attached is a patch (generated on 16.04) that activates volume groups
when root=UUID=... is on the kernel command-line.
** Patch added: "activate VGs when root=UUID="
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1573982/+attachment/5038702/+files/lvm2_local_top.patch
--
The machine I was using has been redeployed without LVM. If I get a
chance to redeploy I'll grab the requested logs. It's fairly trivial to
trigger if you have a machine available to deploy with lvm boot as
described above.
--
** Changed in: curtin
Status: New => Incomplete
--
@Chris,
Can you attach the output of:
maas machine get-curtin-config
And attach the curtin log as well (you can grab that from the UI under
the Installation tab).
Also, this seems to be a wider issue with Ubuntu.
Curtin is the one that writes this configuration, so marking this as
Incomplete for curtin.
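For reference, the MAAS CLI invocation takes a profile and a system ID; roughly (a sketch, with $PROFILE and $SYSTEM_ID standing in for your own values):

    maas $PROFILE machine get-curtin-config $SYSTEM_ID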
** Package changed: maas (Ubuntu) => maas
--
I've run across this today and it affects MAAS.
MAAS version: 2.2.2 (6099-g8751f91-0ubuntu1~16.04.1)
Configuring an LVM-based drive with a RAID on top of it for the root
partition will trigger this. Deploying the default kernel / OS will fail
due to inactive volume groups.
The fix, as expected: lvm vgchange -ay
I ran across the same bug. It was caused by the root filesystem being
specified on the kernel command line with the root=UUID= syntax.
This is not handled by the case "$dev" in stanza of activate() in
/usr/share/initramfs-tools/scripts/local-top/lvm2. See the attached
screenshot. If I change the kernel command line to name the root device
directly, the system boots.
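A hedged sketch of the kind of case the attached patch adds (the real script's existing cases and helper names may differ; the vgchange -ay fallback mirrors the workaround used throughout this thread):

    # /usr/share/initramfs-tools/scripts/local-top/lvm2, activate() (sketch)
    case "$dev" in
    /dev/mapper/*|/dev/*/*)
        # Existing style of case: a VG/LV can be derived from the path.
        lvchange_activate "$dev"
        ;;
    UUID=*|/dev/disk/by-uuid/*)
        # No VG/LV name can be derived from a UUID, so activate all VGs.
        lvm vgchange -ay
        ;;
    esac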
Facing a similar problem on a debootstrap rootfs.
Even after ensuring that the lvm2 package is installed (and hence the
initramfs scripts are present) I still get dropped to a shell in the
initramfs. Running `lvchange -ay` causes the volume to show up and
subsequently the bootup will succeed. I pr
Last night I ran into the same problem. I upgraded from 12.04 LTS to 16.04.1
LTS Server and got stuck at boot.
The last message complained about a UUID not being present. It turned out it
was the /usr FS. Doing an "lvm lvscan" from the initrd prompt showed all
but one LV inactive; the only active one was the root LV.
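From the initramfs prompt that check looks roughly like this (VG/LV names and sizes are invented for illustration):

    (initramfs) lvm lvscan
      ACTIVE            '/dev/vg-0/Root' [20.00 GiB] inherit
      inactive          '/dev/vg-0/Usr'  [10.00 GiB] inherit
      inactive          '/dev/vg-0/Home' [50.00 GiB] inherit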
I wonder if this is due to the use of systemd. As seen on
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=774082 for Debian.
** Bug watch added: Debian Bug tracker #774082
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=774082
--
Faced with the same behavior yesterday; the only workaround for me was
adding the line "vgchange -ay" to
/usr/share/initramfs-tools/scripts/local-top/lvm2.
I didn't change any config for a couple of months before this issue, only
executed apt-get upgrade on a regular basis.
However, now I get the following:
I can confirm the same issue here after upgrading from 14.04 to 16.04.
Note that on my system, / is not on LVM.
LVM is not initiated at boot time nor at init time, and the system gave
up mounting /usr. For me, this is even worse: even when / is mounted and
we are supposed to
I can confirm this bug to be present also in lvm2 (2.02.133-1ubuntu10).
I got the affected system (upgraded via do-release-upgrade on 9 August
2016) back up with the above-mentioned workaround: creating an
/etc/initramfs-tools/scripts/local-top/lvm2 script that does "lvm
vgchange -ay" and making it executable.
The apparent cause is lvm2 (2.02.133-1ubuntu8). From the
changelog (https://launchpad.net/ubuntu/xenial/+source/lvm2/+changelog):
lvm2 (2.02.133-1ubuntu8) xenial; urgency=medium
* Drop debian/85-lvm2.rules. This is redundant now, VGs are already
auto-assembled via lvmetad and 69-lvm-metad.rules.
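To see whether that lvmetad path is active on a given system, two quick checks (standard service and config names on xenial):

    systemctl status lvm2-lvmetad.service    # is the metadata daemon running?
    grep use_lvmetad /etc/lvm/lvm.conf       # is lvm.conf told to use it?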
I just ran into this upgrading from 14.04. My system is a btrfs raid
across two LVM Volume Groups. Both volume groups need to be activated
at boot, before the "btrfs device scan". The system used to do this.
Putting a vgchange in a script in local-top fixes this.
Thanks!
--
My workaround is as I explained in the issue description: I added a
script in the /etc/initramfs-tools/scripts/local-top/ folder which
performs `vgchange -ay`.
--
I'm not seeing ANY LVM volumes active on system boot
(and I'm not putting any of the necessary boot paths on LVM).
After booting the system, the volumes are visible but not active.
If I put one of the drives in fstab, booting Ubuntu breaks.
Is there a workaround to make the system do "vgchange -a y" during boot?
Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: lvm2 (Ubuntu)
Status: New => Confirmed
--
** Description changed:
Soon after the upgrade to Xenial (from 15.10) the boot process got broken.
I'm using LVM for /root, swap and other partitions.
===
The current behaviour is:
When I boot, shortly after the Grub screen I'm getting log messages
like:
---
Scanning for Btrfs filesystems