Re: [pve-devel] [PATCH-SERIES] remove replicated volumes on guest purge

2021-01-29 Thread Fabian Ebner
On 28.01.21 at 17:20, Thomas Lamprecht wrote: On 14.10.20 13:36, Fabian Ebner wrote: Introduces two helper functions in Replication.pm and ReplicationConfig.pm so that the guests can do the removal easily. destroy_vm contains a check whether the guest is still in use by a linked clone (in the

[pve-devel] [PATCH-SERIES v2 qemu-server] Cleanup migration code and improve migration disk cleanup

2021-01-29 Thread Fabian Ebner
This series intends to make the migration code more readable by simplifying/unifying how we keep track of local volumes and splitting up sync_disks into multiple subroutines. This is done by keeping more information within the hash of local_volumes we obtain in the very beginning and re-using it la
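
A minimal standalone sketch of that approach; the field names (drivename, targetsid, bwlimit, replicated) are assumptions for illustration, not the exact schema used by the series. All per-volume information gets collected once into a single hash keyed by volume ID and is re-used by the later migration phases.

    use strict;
    use warnings;

    # One hash keyed by volid, filled during the initial scan and carried
    # through all later phases instead of re-scanning the storages.
    my $local_volumes = {
        'local-lvm:vm-100-disk-0' => {
            drivename  => 'scsi0',       # config entry referencing the volume
            targetsid  => 'local-lvm',   # storage to migrate to
            bwlimit    => 102400,        # KiB/s, looked up once per storage
            replicated => 0,
        },
    };

    # Later code only consults the hash.
    for my $volid (sort keys %$local_volumes) {
        my $info = $local_volumes->{$volid};
        print "$volid -> $info->{targetsid} (drive $info->{drivename})\n";
    }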

[pve-devel] [PATCH v2 qemu-server 05/13] migration: fix calculation of bandwidth limit for non-disk migration

2021-01-29 Thread Fabian Ebner
The case with (1) no generic 'migration' limit from the storage plugin and (2) a migrate_speed limit in the VM config was broken. It would assign 0 to migrate_speed when picking the minimum value and then fall back to the default value. Fix it by checking if bwlimit is 0 before picking the minimum. Also,
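
A rough standalone sketch of the described fix, with assumed names (effective_limit is not a function from the patch): a storage bwlimit of 0 means "unlimited", so it must not win a min() against the VM's migrate_speed setting.

    use strict;
    use warnings;
    use List::Util qw(min);

    sub effective_limit {
        my ($storage_bwlimit, $migrate_speed) = @_;
        # Broken variant: min(0, $migrate_speed) is 0, which later falls
        # back to the default limit and ignores migrate_speed entirely.
        # Fixed variant: only take the minimum when both limits are set.
        if ($storage_bwlimit && $migrate_speed) {
            return min($storage_bwlimit, $migrate_speed);
        }
        return $storage_bwlimit || $migrate_speed;  # whichever is set (0 = unlimited)
    }

    print effective_limit(0, 100), "\n";   # 100, not 0
    print effective_limit(50, 100), "\n";  # 50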

[pve-devel] [PATCH v2 qemu-server 07/13] migration: add nbd migrated volumes to volume_map earlier

2021-01-29 Thread Fabian Ebner
and avoid a little bit of duplication by creating a helper Signed-off-by: Fabian Ebner --- No changes from v1 PVE/QemuMigrate.pm | 30 ++ 1 file changed, 18 insertions(+), 12 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index db68371..b10638a 1

[pve-devel] [PATCH v2 qemu-server 06/13] migration: save targetstorage and bwlimit in local_volumes hash and re-use information

2021-01-29 Thread Fabian Ebner
It is enough to call get_bandwidth_limit once for each source_storage. Signed-off-by: Fabian Ebner --- Changes from v1: * avoid a long line PVE/QemuMigrate.pm | 27 ++- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigra
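
A small, self-contained illustration of the "look up once per storage" idea; lookup_bwlimit() below is only a stand-in for the real PVE helper, not its actual interface.

    use strict;
    use warnings;

    sub lookup_bwlimit {
        my ($storeid) = @_;
        print "limit lookup for $storeid\n";   # should appear once per storage
        return 10240;
    }

    my %bwlimit_per_storage;
    my @volumes = (
        { volid => 'local:vm-100-disk-0', storage => 'local' },
        { volid => 'local:vm-100-disk-1', storage => 'local' },
    );

    for my $vol (@volumes) {
        my $sid = $vol->{storage};
        $bwlimit_per_storage{$sid} //= lookup_bwlimit($sid);  # cached per storage
        $vol->{bwlimit} = $bwlimit_per_storage{$sid};
    }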

[pve-devel] [PATCH v2 qemu-server 10/13] migration: use storage_migration for checks instead of online_local_volumes

2021-01-29 Thread Fabian Ebner
Like this we don't need to worry about auto-vivification. Signed-off-by: Fabian Ebner --- No changes from v1 PVE/QemuMigrate.pm | 24 +++- 1 file changed, 11 insertions(+), 13 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index 94f3328..09289a5 100644 --
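
For reference, a generic illustration of the autovivification pitfall being avoided (not code from the patch): merely checking a nested key can create the intermediate entry as a side effect, whereas a precomputed scalar flag can be tested without that risk.

    use strict;
    use warnings;

    my $state = {};
    # The rvalue check below autovivifies $state->{online_local_volumes}
    # as an empty hashref, even though we only wanted to read it.
    if ($state->{online_local_volumes}{'vm-100-disk-0'}) { }
    print exists $state->{online_local_volumes} ? "vivified\n" : "absent\n";  # "vivified"

    # A plain scalar flag has no such side effect.
    my $storage_migration = 1;
    print "needs storage migration\n" if $storage_migration;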

[pve-devel] [PATCH v2 qemu-server 08/13] migration: simplify removal of local volumes and get rid of self->{volumes}

2021-01-29 Thread Fabian Ebner
This also changes the behavior to remove the local copies of offline migrated volumes only after the migration has finished successfully (this is relevant for mixed settings, e.g. online migration with unused/vmstate disks). local_volumes contains both the volumes previously in $self->{volumes} a
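
A sketch of the described ordering, with assumed structure and field names (migrated_offline is an illustration): local copies are only freed once the migration as a whole has succeeded.

    use strict;
    use warnings;

    my $local_volumes = {
        'local:vm-100-disk-1' => { migrated_offline => 1 },
        'local:vm-100-disk-0' => { migrated_offline => 0 },  # mirrored online instead
    };
    my $migration_ok = 1;   # set only after all phases finished successfully

    if ($migration_ok) {
        for my $volid (sort keys %$local_volumes) {
            next if !$local_volumes->{$volid}{migrated_offline};
            print "removing local copy of $volid\n";   # the real code frees the volume here
        }
    }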

[pve-devel] [PATCH v2 qemu-server 12/13] migration: split out replication from scan_local_volumes

2021-01-29 Thread Fabian Ebner
and avoid one loop over the config, by extending foreach_volid to include the drivename. Signed-off-by: Fabian Ebner --- New in v2 Having the drivename attribute might also be useful for refactoring the target_drive handling, but that's something for a follow-up series On my first version

[pve-devel] [PATCH v2 qemu-server 13/13] migration: move finishing block jobs to phase2 for better/uniform error handling

2021-01-29 Thread Fabian Ebner
avoids the possibility of dying during phase3_cleanup; instead of needing to duplicate the cleanup ourselves, we benefit from phase2_cleanup doing so. The duplicate cleanup was also very incomplete: it didn't stop the remote kvm process (leading to 'VM already running' when trying to migrate again a

[pve-devel] [PATCH v2 qemu-server 03/13] migration: avoid re-scanning all volumes

2021-01-29 Thread Fabian Ebner
by using the information obtained in the first scan. This also makes sure we only scan local storages. Signed-off-by: Fabian Ebner --- No changes from v1 PVE/QemuMigrate.pm | 7 +++ 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index

[pve-devel] [PATCH v2 qemu-server 01/13] test: migration: add parse_volume_id calls

2021-01-29 Thread Fabian Ebner
so it fails when an invalid volume ID comes in. Signed-off-by: Fabian Ebner --- New in v2, added because I ran into a problem with an early version of patch #12 which wasn't detected by the tests. See patch #12 for the details. test/MigrationTest/QemuMigrateMock.pm | 3 +++ test/MigrationTest/QmMock.
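
A standalone illustration of why validating volume IDs in the test mocks helps; the parse_volume_id below only mimics the real helper's die-on-malformed-input behaviour and is not its actual implementation.

    use strict;
    use warnings;

    sub parse_volume_id {
        my ($volid) = @_;
        return ($1, $2) if $volid =~ m/^([a-z][a-z0-9_.-]*[a-z0-9]):(.+)$/i;
        die "unable to parse volume ID '$volid'\n";
    }

    # A mocked storage call that parses its input up front makes a test
    # passing a malformed volid fail loudly instead of silently succeeding.
    my ($storeid, $volname) = parse_volume_id('local-lvm:vm-100-disk-0');
    print "$storeid / $volname\n";

    eval { parse_volume_id('not-a-volid') };
    print "rejected: $@" if $@;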

[pve-devel] [PATCH v2 qemu-server 04/13] migration: split out config_update_local_disksizes from scan_local_volumes

2021-01-29 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- No changes from v1 PVE/QemuMigrate.pm | 55 ++ 1 file changed, 31 insertions(+), 24 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index d0295f8..455581c 100644 --- a/PVE/QemuMigrate.pm +++ b/PVE/Qem

[pve-devel] [PATCH v2 qemu-server 11/13] migration: keep track of replicated volumes via local_volumes

2021-01-29 Thread Fabian Ebner
by extending filter_local_volumes. Signed-off-by: Fabian Ebner --- Changes from v1: * rebase (new check for is_replicated was introduced in the meantime) * move setting of replicated flag to earlier (previously it happened after run_replication) so that the next patch works PVE/Qe

[pve-devel] [PATCH v2 qemu-server 09/13] migration: cleanup_remotedisks: simplify and include more disks

2021-01-29 Thread Fabian Ebner
Namely, those migrated with storage_migrate by using the information from volume_map. Call cleanup_remotedisks in phase1_cleanup as well, because that's where we end up if sync_offline_local_volumes fails, and some disks might already have been transferred successfully. Note that the local disks are st

[pve-devel] [PATCH v2 qemu-server 02/13] migration: split sync_disks into two functions

2021-01-29 Thread Fabian Ebner
by making local_volumes class-accessible. One function is for scanning all local volumes and one is for actually syncing offline volumes via storage_migrate. The exception is replicated volumes; this still happens during the scan for now. Also introduce a filter_local_volumes helper, to make li
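
A minimal sketch of what a filter helper of this kind could look like; the signature and the migration_mode attribute are assumptions for illustration, not the exact interface introduced here.

    use strict;
    use warnings;

    my $local_volumes = {
        'local:vm-100-disk-0' => { migration_mode => 'online'  },
        'local:vm-100-disk-1' => { migration_mode => 'offline' },
        'local:vm-100-disk-2' => { migration_mode => 'offline' },
    };

    # Return the volids matching the requested mode (or all of them) in a
    # stable order, so callers don't repeat the same grep over the hash.
    sub filter_local_volumes {
        my ($volumes, $mode) = @_;
        return sort grep {
            !defined($mode) || $volumes->{$_}{migration_mode} eq $mode
        } keys %$volumes;
    }

    my @offline = filter_local_volumes($local_volumes, 'offline');
    print "offline: @offline\n";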