There can be other reasons to do a restart-migration, see also
https://bugzilla.proxmox.com/show_bug.cgi?id=4530, so I feel like this
should be split into two patches: one introducing 'target-reboot' and one
introducing 'target-cpu'.
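For context, with the options proposed here, such a migration would be
invoked roughly like this (sketch; endpoint, storage and bridge values
are made up):

    qm remote-migrate 100 100 \
        'host=target.example.com,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<fp>' \
        --target-bridge vmbr0 --target-storage local-zfs \
        --online --target-reboot --target-cpu x86-64-v2-AES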
For 'target-reboot', the question is if we should call it 'restart', for
consistency with container migration? It could also be introduced for
normal migration while we're at it. One could argue that a true 'restart'
migration would also migrate the volumes offline, but right now, I don't
see a big downside to doing it via NBD like in this patch. Still,
something we should think about. If it turns out to be really needed,
we'd need two different ways to do a restart migration :/

On 28.09.23 at 16:45, Alexandre Derumier wrote:
> This patch add support for remote migration when target
> cpu model is different.
>
> target-reboot param need to be defined to allow migration
> whens source vm is online.
>
> When defined, only the live storage migration is done,
> and instead to transfert memory, we cleanly shutdown source vm
> and restart the target vm. (like a virtual reboot between source/dest)

Missing your Signed-off-by.

> ---
>  PVE/API2/Qemu.pm   | 23 ++++++++++++++++++++++-
>  PVE/CLI/qm.pm      | 11 +++++++++++
>  PVE/QemuMigrate.pm | 31 +++++++++++++++++++++++++++++--
>  3 files changed, 62 insertions(+), 3 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 774b0c7..38991e9 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -4586,6 +4586,17 @@ __PACKAGE__->register_method({
>          optional => 1,
>          default => 0,
>      },
> +    'target-cpu' => {
> +        optional => 1,
> +        description => "Target Emulated CPU model. For online migration, this require target-reboot option",

To enforce it, you can use:
    requires => 'target-reboot',

> +        type => 'string',
> +        format => 'pve-vm-cpu-conf',
> +    },
> +    'target-reboot' => {
> +        type => 'boolean',
> +        description => "For online migration , don't migrate memory, only storage. Then, the source vm is shutdown and the target vm is restarted.",
> +        optional => 1,
> +    },
>      'target-storage' => get_standard_option('pve-targetstorage', {
>          completion => \&PVE::QemuServer::complete_migration_storage,
>          optional => 0,
> @@ -4666,7 +4677,7 @@ __PACKAGE__->register_method({
>
>      if (PVE::QemuServer::check_running($source_vmid)) {
>          die "can't migrate running VM without --online\n" if !$param->{online};
> -
> +        die "can't migrate running VM without --target-reboot when target cpu is different" if $param->{'target-cpu'} && !$param->{'target-reboot'};
>      } else {
>          warn "VM isn't running. Doing offline migration instead.\n" if $param->{online};
>          $param->{online} = 0;
> @@ -4683,6 +4694,7 @@ __PACKAGE__->register_method({
>      raise_param_exc({ 'target-bridge' => "failed to parse bridge map: $@" })
>          if $@;
>
> +
>      die "remote migration requires explicit storage mapping!\n"
>          if $storagemap->{identity};
>

Nit: unrelated change

> @@ -5732,6 +5744,15 @@ __PACKAGE__->register_method({
>          PVE::QemuServer::nbd_stop($state->{vmid});
>          return;
>      },
> +    'restart' => sub {
> +        PVE::QemuServer::vm_stop(undef, $state->{vmid}, 1, 1);

The first parameter is $storecfg and is not optional. To avoid
deactivating the volumes, use the $keepActive parameter.
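I.e. something like this (sketch, untested; assuming vm_stop()'s
signature is still ($storecfg, $vmid, $skiplock, $nocheck, $timeout,
$shutdown, $force, $keepActive, ...)):

    # stop the paused target VM, but keep its volumes active, so they
    # don't get deactivated just to be activated again by the
    # vm_start_nolock() call right below
    PVE::QemuServer::vm_stop(
        $state->{storecfg}, # required
        $state->{vmid},
        1,     # $skiplock
        1,     # $nocheck
        undef, # $timeout
        undef, # $shutdown
        undef, # $force
        1,     # $keepActive
    );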
> +        my $info = PVE::QemuServer::vm_start_nolock(
> +            $state->{storecfg},
> +            $state->{vmid},
> +            $state->{conf},
> +        );
> +        return;
> +    },
>      'resume' => sub {
>          if (PVE::QemuServer::Helpers::vm_running_locally($state->{vmid})) {
>              PVE::QemuServer::vm_resume($state->{vmid}, 1, 1);
> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
> index b17b4fe..9d89cfe 100755
> --- a/PVE/CLI/qm.pm
> +++ b/PVE/CLI/qm.pm
> @@ -189,6 +189,17 @@ __PACKAGE__->register_method({
>          optional => 1,
>          default => 0,
>      },
> +    'target-cpu' => {
> +        optional => 1,
> +        description => "Target Emulated CPU model. For online migration, this require target-reboot option",

Again, this can be enforced with:
    requires => 'target-reboot',

> +        type => 'string',
> +        format => 'pve-vm-cpu-conf',
> +    },
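Spelled out, the parameter definition could then look something like
this (sketch; the description wording is also adjusted):

    'target-cpu' => {
        optional => 1,
        description => "Target emulated CPU model. For online migration,"
            ." the 'target-reboot' option is required.",
        requires => 'target-reboot',
        type => 'string',
        format => 'pve-vm-cpu-conf',
    },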
> +    'target-reboot' => {
> +        type => 'boolean',
> +        description => "For online migration , don't migrate memory, only storage. Then, the source vm is shutdown and the target vm is restarted.",
> +        optional => 1,
> +    },
>      'target-storage' => get_standard_option('pve-targetstorage', {
>          completion => \&PVE::QemuServer::complete_migration_storage,
>          optional => 0,
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 5ea78a7..8131b0b 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -729,6 +729,11 @@ sub cleanup_bitmaps {
>  sub live_migration {
>      my ($self, $vmid, $migrate_uri, $spice_port) = @_;
>
> +    if($self->{opts}->{'target-reboot'}){
> +        $self->log('info', "target reboot - skip live migration.");

Suggestion: "using restart migration - skipping live migration"

> +        return;
> +    }
> +
>      my $conf = $self->{vmconf};
>
>      $self->log('info', "starting online/live migration on $migrate_uri");
> @@ -993,6 +998,7 @@ sub phase1_remote {
>      my $remote_conf = PVE::QemuConfig->load_config($vmid);
>      PVE::QemuConfig->update_volume_ids($remote_conf, $self->{volume_map});
>
> +    $remote_conf->{cpu} = $self->{opts}->{'target-cpu'} if $self->{opts}->{'target-cpu'};
>      my $bridges = map_bridges($remote_conf, $self->{opts}->{bridgemap});
>      for my $target (keys $bridges->%*) {
>          for my $nic (keys $bridges->{$target}->%*) {
> @@ -1356,7 +1362,14 @@ sub phase2 {
>      # finish block-job with block-job-cancel, to disconnect source VM from NBD
>      # to avoid it trying to re-establish it. We are in blockjob ready state,
>      # thus, this command changes to it to blockjob complete (see qapi docs)
> -    eval { PVE::QemuServer::qemu_drive_mirror_monitor($vmid, undef, $self->{storage_migration_jobs}, 'cancel'); };
> +    my $finish_cmd = "cancel";
> +    if ($self->{opts}->{'target-reboot'}) {
> +        # no live migration.
> +        # finish block-job with block-job-complete, the source will switch to remote NDB
> +        # then we cleanly stop the source vm at phase3

Nit: "during phase3" or "in phase3"

> +        $finish_cmd = "complete";
> +    }
> +    eval { PVE::QemuServer::qemu_drive_mirror_monitor($vmid, undef, $self->{storage_migration_jobs}, $finish_cmd); };
>      if (my $err = $@) {
>          die "Failed to complete storage migration: $err\n";
>      }
> @@ -1573,7 +1586,17 @@ sub phase3_cleanup {
>      };
>
>      # always stop local VM with nocheck, since config is moved already
> -    eval { PVE::QemuServer::vm_stop($self->{storecfg}, $vmid, 1, 1); };
> +    my $shutdown_timeout = undef;
> +    my $shutdown = undef;
> +    my $force_stop = undef;
> +    if ($self->{opts}->{'target-reboot'}) {
> +        $shutdown_timeout = 180;
> +        $shutdown = 1;
> +        $force_stop = 1;
> +        $self->log('info', "clean shutdown of source vm.");

Sounds like it already happened, and with force_stop=1 it might not be
as clean in the worst case ;) Maybe just "shutting down source VM"?

> +    }
> +
> +    eval { PVE::QemuServer::vm_stop($self->{storecfg}, $vmid, 1, 1, $shutdown_timeout, $shutdown, $force_stop); };
>      if (my $err = $@) {
>          $self->log('err', "stopping vm failed - $err");
>          $self->{errors} = 1;
> @@ -1607,6 +1630,10 @@ sub phase3_cleanup {
>      # clear migrate lock
>      if ($tunnel && $tunnel->{version} >= 2) {
>          PVE::Tunnel::write_tunnel($tunnel, 10, "unlock");
> +        if ($self->{opts}->{'target-reboot'}) {
> +            $self->log('info', "restart target vm.");
> +            PVE::Tunnel::write_tunnel($tunnel, 10, 'restart');
> +        }
>
>          PVE::Tunnel::finish_tunnel($tunnel);
>      } else {

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel