--- Begin Message ---
> how are you hooking the migration state to know whether deactivation
> should be done or not?
By using the VM property "lock", which must be set to "migrate":
PVE::Cluster::get_guest_config_properties(['lock']);
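For illustration, a minimal sketch of such a check (assuming
get_guest_config_properties() returns a hash keyed by VMID;
vm_is_migrating() is a hypothetical helper, not an existing PVE API):

    use PVE::Cluster;

    # Hypothetical helper: true while the VM holds a 'migrate' lock.
    sub vm_is_migrating {
        my ($vmid) = @_;
        # Assumed result shape: { $vmid => { lock => $value } }
        my $props = PVE::Cluster::get_guest_config_properties(['lock']);
        my $lock = $props->{$vmid}->{lock} // '';
        return $lock eq 'migrate';
    }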
> qm start (over SSH, is this being killed?)
> -> start_vm task worker (
> Denis Kanchev wrote on 02.06.2025 15:23 CEST:
--- Begin Message ---
We tend to prevent having a volume active on two nodes, as it may lead to
data corruption, so we detach the volume from all nodes (except the target
one) via our shared storage system.
In the sub activate_volume() our logic is to not detach the volume from
other hosts in case of an ongoing migration.
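As a rough sketch, the guard could look like this in the plugin (simplified
and not our actual code: detach_from_other_nodes() is a placeholder for the
vendor-specific call, and parse_volname() is assumed to return the owning
VMID as its third value):

    sub activate_volume {
        my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;

        # Derive the owning VMID from the volume name (plugin-specific).
        my (undef, undef, $vmid) = $class->parse_volname($volname);

        my $props = PVE::Cluster::get_guest_config_properties(['lock']);
        my $lock = defined($vmid) ? ($props->{$vmid}->{lock} // '') : '';

        # During a migration the volume is legitimately active on the
        # source node too, so skip the exclusive detach in that case.
        detach_from_other_nodes($scfg, $volname) # placeholder vendor call
            if $lock ne 'migrate';

        # ... continue with the normal attach path ...
    }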
> Denis Kanchev wrote on 02.06.2025 11:18 CEST:
--- Begin Message ---
My bad :) in terms of Proxmox it must be handing over the storage control -
the storage plugin function activate_volume() is called in our case, which
moves the storage to the new VM.
So no data is moved across the nodes and only the volumes get re-attached.
Thanks for the ple
> Denis Kanchev wrote on 02.06.2025 10:35 CEST:
--- Begin Message ---
> I thought your storage plugin is a shared storage, so there is no storage
> migration at all, yet you keep talking about storage migration?
It's a shared storage indeed; the issue was that the migration process on
the destination host got OOM-killed and the migration failed, m
> Denis Kanchev wrote on 29.05.2025 09:33 CEST:
--- Begin Message ---
The issue here is that the storage plugin activate_volume() is called after
migration cancel, which in the case of network shared storage can make
things bad.
This is a sort of race condition, because migration_cancel won't stop the
storage migration on the remote server. As you can
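To make the ordering concrete, a simplified (hypothetical) timeline of the
failure mode:

    t0  source: migration starts, the VM lock is set to 'migrate'
    t1  target: the VM is started, activate_volume() is about to run
    t2  source: the migration is cancelled (here after the OOM kill)
    t3  target: activate_volume() still runs and detaches the volume from
        all other nodes, including the source where the VM keeps running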
> Denis Kanchev wrote on 28.05.2025 08:13 CEST:
--- Begin Message ---
Here is the task log:
2025-04-11 03:45:42 starting migration of VM 2282 to node 'telpr01pve05'
(10.10.17.5)
2025-04-11 03:45:42 starting VM 2282 on remote node 'telpr01pve05'
2025-04-11 03:45:45 [telpr01pve05] Warning: sch_htb: quantum of class 10001
is big. Consider r2q change
> Denis Kanchev wrote on 22.05.2025 08:55 CEST:
--- Begin Message ---
The parent of the storage migration process gets killed.
It seems that this is the desired behavior and, as far as I understand it,
the child worker is detached from the parent and has nothing more to do
with it after spawning.
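As a generic illustration of that detachment (the usual fork/setsid
pattern, not the actual PVE worker code):

    use POSIX qw(setsid);

    my $pid = fork() // die "fork failed: $!";
    if ($pid == 0) {
        setsid();  # new session: the child no longer depends on the parent
        # ... long-running storage work would continue here ...
        exit(0);
    }
    # If the parent is OOM-killed at this point, the child keeps running.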
Thanks for the information, it was ver
> Denis Kanchev via pve-devel wrote on 21.05.2025 15:13 CEST:
> Hello,
>
> We had an issue with a customer migrating a VM between nodes using our
> shared storage solution.
>
> On the target host the OOM killer killed the main migration process, but
> the child process (which act