On March 10, 2025 3:01 pm, Friedrich Weber wrote:
> On 07/03/2025 13:14, Fabian Grünbichler wrote:
>>> # LVM-thick/LVM-thin
>>>
>>> Note that this change affects all LVs on LVM-thick, not just ones on
>>> shared storage. As a result, also on single-node hosts, local guest
>>> disk LVs on LVM-thick will not be automatically active after boot
>>> anymore (after applying all patches of this series). Guest disk LVs
>>> on LVM-thin will still be auto-activated, but since LVM-thin storage
>>> is necessarily local, we don't run into #4997 here.
>> 
>> we could check the shared property, but I don't think having them not
>> auto-activated hurts as long as it is documented..
> 
> This is referring to LVs on *local* LVM-*thick* storage, right? In that
> case, I'd agree that not having them autoactivated either is okay
> (cleaner even).

yes

> The patch series currently doesn't touch the LvmThinPlugin at all, so
> all LVM-*thin* LVs will still be auto-activated at boot. We could also
> patch LvmThinPlugin to create new thin LVs with `--setautoactivation n`
> -- though it wouldn't give us much, except consistency with LVM-thick.

well, if you have many volumes, not activating them automatically might
also save some time on boot ;) but yeah, it shouldn't cause any issues.
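for illustration, creating a thin LV with the flag could look roughly
like this (a sketch only, not what the plugin actually does; VG "pve",
pool "data" and the LV name are made-up placeholders, and LVM=echo turns
it into a dry run that just prints the command):

```shell
# dry-run sketch: with LVM=echo nothing is changed, the command is only
# printed. set LVM= (empty) to actually run it. all names are examples.
LVM=echo
$LVM lvcreate --type thin --virtualsize 32G --thinpool data \
    --name vm-100-disk-0 --setautoactivation n pve
```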

>>> # Transition to LVs with `--setautoactivation n`
>>>
>>> Both v1 and v2 approaches only take effect for new LVs, so we should
>>> probably also have pve8to9 check for guest disks on (shared?) LVM
>>> that have autoactivation enabled, and suggest to the user to manually
>>> disable autoactivation on the LVs, or even the entire VG if it holds
>>> only PVE-managed LVs.
>> 
>> if we want to wait for PVE 9 anyway to start enabling (disabling? ;))
>> it, then the upgrade script would be a nice place to tell users to fix
>> up their volumes?
> 
> The upgrade script being pve8to9, right? I'm just not sure yet what to
> suggest: `lvchange --setautoactivation n` on each LV, or simply
> `vgchange --setautoactivation n` on the whole shared VG (provided it
> only contains PVE-managed LVs).

yes.
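to make the two variants concrete, the hint could show something like
this (dry-run sketch, VG/LV names are placeholders; drop the LVM=echo
prefix to actually apply it):

```shell
# dry-run sketch: LVM=echo just prints the commands. names are examples.
LVM=echo

# option 1: per LV, safe even if the VG also holds non-PVE LVs
$LVM lvchange --setautoactivation n pve/vm-100-disk-0

# option 2: whole VG, only if it contains nothing but PVE-managed LVs
$LVM vgchange --setautoactivation n pve
```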

> 
>> OTOH, setting the flag automatically starting with PVE 9 also for existing
>> volumes should have no downsides, [...]
> 
> Hmm, but how would we do that automatically?

e.g., once on upgrading in postinst, or in activate_storage if we find a
cheap way to skip doing it over and over ;)

> 
>> we need to document anyway that the behaviour there changed (so that
>> people that rely on them becoming auto-activated on boot can adapt
>> whatever is relying on that).. or we could provide a script that does
>> it post-upgrade..
> 
> Yes, an extra script to run after the upgrade might be an option. Though
> we'd also need to decide whether to disable autoactivation on each
> individual LV, or on the complete VG (then we'd just assume that there
> are no non-PVE-managed LVs in the VG that the user wants autoactivated).

I think doing it on each volume managed by PVE is the safer and more
consistent option..
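something along these lines, maybe (untested sketch; the vm-/base- name
filter is an assumption about how PVE-managed LVs can be recognized, and
the VG name is a placeholder):

```shell
# sketch: clear the autoactivation flag on each PVE-managed LV in a VG,
# instead of flipping the whole VG. the vm-/base- prefix filter is an
# assumption, not something PVE ships.
disable_autoactivation() {
    vg="$1"
    lvs --noheadings -o lv_name "$vg" | tr -d ' ' | \
        grep -E '^(vm|base)-' | while read -r lv; do
            lvchange --setautoactivation n "$vg/$lv"
        done
}
```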

>>> We could implement something on top to make the transition smoother,
>>> some ideas:
>>>
>>> - When activating existing LVs, check the auto activation flag, and if auto
>>>   activation is still enabled, disable it.
>> 
>> the only question is whether we want to "pay" for that on each
>> activate_volume?
> 
> Good question. It does seem a little extreme, also considering that once
> all existing LVs have autoactivation disabled, all new LVs will have the
> flag disabled as well and the check becomes obsolete.
> 
> It just occurred to me that we could also pass `--setautoactivation n`
> to `lvchange -ay` in `activate_volume`, but a test shows that this
> triggers a metadata update on *each activate_volume*, which sounds like
> a bad idea.

yeah that doesn't sound too good ;)
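fwiw, the cheaper variant of the check-the-flag idea from above would be
to read the flag first and only write metadata when needed, e.g. (sketch
only; assumes the "autoactivation" report field of recent lvs, which I
believe prints "enabled" while the flag is set):

```shell
# sketch: only touch LV metadata if autoactivation is still enabled.
# assumes "lvs -o autoactivation" reports "enabled" for such LVs
# (recent LVM versions) -- to be verified.
maybe_disable_autoactivation() {
    lv="$1"
    flag=$(lvs --noheadings -o autoactivation "$lv" | tr -d ' ')
    if [ "$flag" = "enabled" ]; then
        lvchange --setautoactivation n "$lv"
    fi
}
```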


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
