Superseded-by: https://lore.proxmox.com/pve-devel/20250429113646.25738-1-f.we...@proxmox.com/
Some comments inline.

On 12/03/2025 09:39, Friedrich Weber wrote:
> On 11/03/2025 11:40, Fabian Grünbichler wrote:
>> On March 10, 2025 3:01 pm, Friedrich Weber wrote:
>>> On 07/03/2025 13:14, Fabian Grünbichler wrote:
>>>>> # LVM-thick/LVM-thin
>>>>>
>>>>> Note that this change affects all LVs on LVM-thick, not just
>>>>> ones on shared storage. As a result, also on single-node hosts,
>>>>> local guest disk LVs on LVM-thick will not be automatically
>>>>> active after boot anymore (after applying all patches of this
>>>>> series). Guest disk LVs on LVM-thin will still be
>>>>> auto-activated, but since LVM-thin storage is necessarily
>>>>> local, we don't run into #4997 here.
>>>>
>>>> we could check the shared property, but I don't think having
>>>> them not auto-activated hurts as long as it is documented..
>>>
>>> This is referring to LVs on *local* LVM-*thick* storage, right?
>>> In that case, I'd agree that not having them autoactivated either
>>> is okay (cleaner even).
>>
>> yes
>>
>>> The patch series currently doesn't touch the LvmThinPlugin at
>>> all, so all LVM-*thin* LVs will still be auto-activated at boot.
>>> We could also patch LvmThinPlugin to create new thin LVs with
>>> `--setautoactivation n` -- though it wouldn't give us much,
>>> except consistency with LVM-thick.
>>
>> well, if you have many volumes, not activating them automatically
>> might also save some time on boot ;) but yeah, shouldn't cause any
>> issues.
>
> Yeah, then I'll limit this change to the LVMPlugin and keep the
> LvmThinPlugin unchanged.

I changed my mind and included LvmThinPlugin in v3 (see patches
there), but happy to reconsider.

>>>>> # Transition to LVs with `--setautoactivation n`
>>>>>
>>>>> Both v1 and v2 approaches only take effect for new LVs, so we
>>>>> should probably also have pve8to9 check for guest disks on
>>>>> (shared?) LVM that have autoactivation enabled, and suggest to
>>>>> the user to manually disable autoactivation on the LVs, or even
>>>>> the entire VG if it holds only PVE-managed LVs.
>>>>
>>>> if we want to wait for PVE 9 anyway to start enabling
>>>> (disabling? ;)) it, then the upgrade script would be a nice
>>>> place to tell users to fix up their volumes?
>>>
>>> The upgrade script being pve8to9, right? I'm just not sure yet
>>> what to suggest: `lvchange --setautoactivation n` on each LV, or
>>> simply `vgchange --setautoactivation n` on the whole shared VG
>>> (provided it only contains PVE-managed LVs).
>>
>> yes.
>>
>>>> OTOH, setting the flag automatically starting with PVE 9 also
>>>> for existing volumes should have no downsides, [...]
>>>
>>> Hmm, but how would we do that automatically?
>>
>> e.g., once on upgrading in postinst, or in activate_storage if we
>> find a cheap way to skip doing it over and over ;)
>>
>>>> we need to document anyway that the behaviour there changed (so
>>>> that people that rely on them becoming auto-activated on boot
>>>> can adapt whatever is relying on that).. or we could provide a
>>>> script that does it post-upgrade..
>>>
>>> Yes, an extra script to run after the upgrade might be an option.
>>> Though we'd also need to decide whether to deactivate on each
>>> individual LV, or the complete VG (then we'd just assume that
>>> there are no other non-PVE-managed LVs in the VG that the user
>>> wants autoactivated).
>>
>> I think doing it on each volume managed by PVE is the safer and
>> more consistent option..
>
> Sounds good!
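For reference, a minimal sketch of the two manual variants discussed
above (VG/LV names are placeholders):

    # per LV -- the safer, more consistent option:
    lvchange --setautoactivation n myvg/vm-100-disk-0

    # per VG -- only if the VG holds nothing but PVE-managed LVs:
    vgchange --setautoactivation n myvg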
> Discussed the next steps with Fabian off-list, the summary:
>
> - I'll wait a few days to see if zfsonlinux patch 1 gets applied
>   upstream so I can send a proper cherry-pick. We can apply that one
>   before PVE 9, as it shouldn't break any reasonable setup (= setup
>   that doesn't have ZFS on an LVM LV with autoactivation disabled),
>   and it allows the manual `vgchange --setautoactivation n`
>   workaround to work for FC/SAS LUNs before PVE 9.

this was applied both upstream [1] and included in our zfsonlinux
package [2].

> - we keep something like patches 2+3, which pass
>   `--setautoactivation n` to lvcreate in LVMPlugin's `alloc_image`.
>   This can only be applied for PVE 9 (see "Mixed 7/8 cluster" in the
>   cover letter). It solves the auto-activation problem for new
>   volumes.

still done similarly in v3

> - to solve the auto-activation problem for existing volumes, we
>
>   - ship a script (a separate script or even a subcommand of
>     pve8to9?) that disables autoactivation for all PVE-managed LVs
>     on shared VGs, after asking for user confirmation. The user has
>     to run this script manually. The script probably needs to take
>     care to cluster-lock the storage, as disabling autoactivation
>     triggers a metadata update.

I went for a pve8to9 subcommand in v3 (see the sketch at the end of
this mail).

>   - add a check to pve8to9 for PVE-managed LVs with autoactivation
>     enabled (on shared VGs) which suggests to run the script from
>     the previous point

done in v3
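For illustration, a rough sketch of what such a subcommand boils down
to for a single shared VG (`myvg` is a placeholder; the actual
subcommand additionally asks for user confirmation and cluster-locks
the storage as described above, which this plain shell loop does not):

    vg=myvg  # placeholder: name of the shared VG
    # only touch PVE-managed guest disk LVs, i.e. names matching
    # vm-<vmid>-disk-<n> and base-<vmid>-disk-<n> (templates)
    for lv in $(lvs --noheadings -o lv_name "$vg"); do
        case "$lv" in
            vm-*-disk-*|base-*-disk-*)
                lvchange --setautoactivation n "$vg/$lv"
                ;;
        esac
    done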