# Summary

With default settings, LVM autoactivates LVs when it sees a new VG, e.g. after boot or iSCSI login. In a cluster with guest disks on a shared LVM VG (e.g. on top of iSCSI/Fibre Channel (FC)/direct-attached SAS), this can indirectly cause guest creation or migration to fail. See bug #4997 [1] and patch #1 for details.
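For reference, whether a given LV is currently flagged for autoactivation can be checked with LVM's reporting tools. A minimal sketch, assuming an lvm2 version recent enough to know `--setautoactivation` and the matching `autoactivation` report field, with `vg0` as a placeholder VG name:

    # list the LVs of VG "vg0" together with their autoactivation setting
    lvs -o lv_name,autoactivation vg0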
The primary goal of this series is to avoid autoactivating thick LVs that hold guest disks, in order to fix #4997. To this end, it patches the LVM storage plugin to create new LVs with autoactivation disabled, and implements a pve8to9 check and a migration script to disable autoactivation on existing LVs (see below for details). The series does the same for LVM-thin storages. While LVM-thin storages are inherently local and cannot run into #4997, it can still make sense to avoid unnecessarily activating thin LVs at boot.

# Why not before PVE 9

As discussed in v2, we should only apply this series for PVE 9, as we can then be sure that all nodes are running at least PVE 8. Here is why we couldn't apply it for PVE 8 already: PVE 7/Bullseye's LVM does not know `--setautoactivation`. A user upgrading from PVE 7 will temporarily have a mixed 7/8 cluster. Once this series is applied, the PVE 8 nodes will create new LVs with `--setautoactivation n`, which the PVE 7 nodes do not know. In my tests, the PVE 7 nodes can read and interact with such LVs just fine, *but*: as soon as a PVE 7 node creates a new (unrelated) LV, the `--setautoactivation n` flag is reset to the default `y` on *all* LVs of the VG. I presume this is because creating a new LV rewrites the VG metadata, and the PVE 7 LVM doesn't write out the `--setautoactivation n` flag. I imagine (but have not tested) that this would cause problems on a mixed cluster.

# pve8to9 script

As discussed in v2+v4, this series implements

(a) a pve8to9 check to detect thick and thin LVs with autoactivation enabled
(b) a script to disable autoactivation on LVs when needed, intended to be run manually by the user during the 8->9 upgrade

As suggested by Thomas in v4, the script (b) is installed under /usr/share/pve-manager/migrations/.

# Bonus fix for FC/SAS multipath+LVM issue

As it turns out, this series seems to additionally fix an issue on hosts with LVM on FC/SAS-attached LUNs *with multipath*, where LVM reports "Device mismatch detected" warnings because the LVs are activated too early in the boot process, before multipath is available. Our current suggested workaround is to install multipath-tools-boot [2]. With this series applied and once users have upgraded to PVE 9, this shouldn't be necessary anymore, as LVs are no longer auto-activated after boot.

# Interaction with zfs-initramfs

zfs-initramfs used to ship an initramfs-tools script that unconditionally activated *all* VGs visible at boot time, ignoring the autoactivation flag. A fix was already applied in v2 [3].

# Patch summary

- Patch #1 makes the LVM plugin create new LVs with `--setautoactivation n`
- Patch #2 makes the LVM-thin plugin disable autoactivation for new LVs
- Patch #3 adds a pve8to9 check for LVM/LVM-thin LV autoactivation, and a migration script to disable autoactivation for existing LVs
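To illustrate the patches above in CLI terms, here is a rough sketch of the equivalent manual LVM commands. This is not the actual plugin or script code; `vg0` and `vm-100-disk-0` are placeholder names:

    # patch #1/#2 in effect: new guest LVs are created with autoactivation
    # disabled (illustrative thick-LVM example)
    lvcreate -L 4G -n vm-100-disk-0 --setautoactivation n vg0

    # patch #3 in effect: the migration script disables autoactivation on
    # existing guest LVs (the real script batches several LVs per lvchange
    # call and asks for confirmation first)
    lvchange --setautoactivation n vg0/vm-100-disk-0

The flag is stored in the VG metadata, which is also why PVE 7's older LVM can silently drop it when it rewrites that metadata (see "Why not before PVE 9" above).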
# Changes since v4

- move the migration code to a dedicated script under /usr/share/pve-manager/migrations, and ask for user confirmation before taking action (thx Thomas!)
- batch lvchange calls and release the lock in between (thx Fabian!)
- drop patch that moves `pve8to9` to `pve8to9 checklist`
- drop patch that ran `proxmox-perltidy` on pve8to9

# Changes since v3

- rebase and run perltidy
- see individual patches for other changes
- @Michael tested+reviewed v3 (thx!), but since there were some code changes, I'm not including the trailers here, even though the changes on #1+#2 were mostly comments/formatting changes.

# Changes since v2

- drop zfsonlinux patch that was since applied
- add patches for LVM-thin
- add pve8to9 patches

v3: https://lore.proxmox.com/pve-devel/20250429113646.25738-1-f.we...@proxmox.com/
v2: https://lore.proxmox.com/pve-devel/20250307095245.65698-1-f.we...@proxmox.com/
v1: https://lore.proxmox.com/pve-devel/20240111150332.733635-1-f.we...@proxmox.com/

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] https://pve.proxmox.com/mediawiki/index.php?title=Multipath&oldid=12039#%22Device_mismatch_detected%22_warnings
[3] https://lore.proxmox.com/pve-devel/ad4c806c-234a-4949-885d-8bb369860...@proxmox.com/

pve-storage:

Friedrich Weber (2):
  fix #4997: lvm: create: disable autoactivation for new logical volumes
  lvmthin: disable autoactivation for new logical volumes

 src/PVE/Storage/LVMPlugin.pm     | 13 ++++++++++++-
 src/PVE/Storage/LvmThinPlugin.pm | 17 ++++++++++++++++-
 2 files changed, 28 insertions(+), 2 deletions(-)

pve-manager:

Friedrich Weber (1):
  pve8to9: check for LVM autoactivation and provide migration script

 PVE/CLI/pve8to9.pm                 |  86 +++++++++++++-
 bin/Makefile                       |   7 +-
 bin/pve-lvm-disable-autoactivation | 174 +++++++++++++++++++++++++++++
 3 files changed, 265 insertions(+), 2 deletions(-)
 create mode 100755 bin/pve-lvm-disable-autoactivation

Summary over all repositories:
 5 files changed, 293 insertions(+), 4 deletions(-)

--
Generated by git-murpp 0.8.1