Re: [pve-devel] [PATCH manager v8 1/2] fix #4849: api: download to storage: automatically detect and configure compression
> There are code paths where $filename is not yet defined here, resulting
> in a rather ugly warning – so this needs upfront checking too – always
> check where the value code path is coming in (yeah, Rust would do that for
> you, but most API endpoints are small enough to be able to do so quickly
> also manually)

I will make a new patch resolving this issue. While I am on it, I will see
what I can resolve of the other issues.
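For readers following along: the upfront check the review asks for boils down to guarding the value with defined() before matching against it. A minimal sketch, assuming the endpoint keeps the detected name in $filename (the variable and field names here are illustrative, not the actual endpoint code):

```perl
use strict;
use warnings;

# Minimal sketch of the kind of guard the review asks for; $res and its
# 'filename' field are assumed names, not the actual API endpoint code.
sub detect_compression {
    my ($res) = @_;

    my $filename = $res->{filename};
    # bail out early instead of matching against an undefined value,
    # which would emit an "uninitialized value" warning
    return undef if !defined($filename);

    return lc($1) if $filename =~ m/\.(gz|lzo|zst)$/i;
    return undef;
}
```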
Re: [pve-devel] [PATCH manager v8 1/2] fix #4849: api: download to storage: automatically detect and configure compression
On 27/09/2023 at 10:03, Philipp Hufnagl wrote:
>> There are code paths where $filename is not yet defined here, resulting
>> in a rather ugly warning – so this needs upfront checking too – always
>> check where the value code path is coming in (yeah, Rust would do that for
>> you, but most API endpoints are small enough to be able to do so quickly
>> also manually)
>
> I will make a new patch resolving this issue. While I am on it, I will see
> what I can resolve of the other issues.

FYI, I have a pretty much done patch for this already here, so if you did
not have anything (close to) already finished, I could push that out – just
mentioning this to avoid potential duplicate work.
Re: [pve-devel] [PATCH manager v8 1/2] fix #4849: api: download to storage: automatically detect and configure compression
On 9/27/23 10:33, Thomas Lamprecht wrote:
> On 27/09/2023 at 10:03, Philipp Hufnagl wrote:
>>> There are code paths where $filename is not yet defined here, resulting
>>> in a rather ugly warning – so this needs upfront checking too – always
>>> check where the value code path is coming in (yeah, Rust would do that for
>>> you, but most API endpoints are small enough to be able to do so quickly
>>> also manually)
>>
>> I will make a new patch resolving this issue. While I am on it, I will see
>> what I can resolve of the other issues.
>
> FYI, I have a pretty much done patch for this already here, so if
> you did not have anything (close to) already finished, I could push
> that out – just mentioning this to avoid potential duplicate work.

I have nothing close. Sorry for the inconvenience, and thank you for fixing it.
Re: [pve-devel] [RFC manager/container/qemu-server/guest-common 0/4] fix #4474: stop tasks may overrule shutdown tasks
Lost track of this a bit, reviving due to user interest [1]. As the series
does not apply anymore, I'll send a new version in any case, but wanted to
ask for feedback before I do. My questions from the cover letter still apply:

On 26/01/2023 09:32, Friedrich Weber wrote:
> * Does it make sense to have overruling optional? Or should "stop"
>   generally overrule shutdown? This might lead to confusing
>   interactions, as Thomas noted [0].
> * Backend: Is there a more elegant way to overrule shutdown tasks,
>   and a better place than pve-guest-common?
> * Frontend: When stopping a VM/CT, we already ask for confirmation.
>   Is an (occasional) second modal dialog with a lot of text a good user
>   experience? Alternatively, I could imagine a checkbox in the first
>   dialog saying "Overrule any active shutdown tasks".

Actually, I don't really like the second modal dialog. What about the
following: when the user clicks "Stop" and the frontend detects an active
shutdown task, the already-existing "Confirm" dialog gets an additional
default-off checkbox "Kill active shutdown tasks" (or similar). This way the
default behavior does not change, but users no longer have to kill active
shutdown tasks manually.

> * This patch series forbids `overrule-shutdown=1` for HA-managed VMs/CTs
>   because I didn't know how overruling should work in a HA setting. Do
>   you have any suggestions?
>
> Since this is my first patch with more than a few lines, I'm especially
> happy about feedback regarding coding style, naming, anything. :)

[1] https://forum.proxmox.com/threads/16235/post-590240
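To make the backend question more concrete: overruling essentially means finding any still-running shutdown task for the same guest and aborting it before issuing the stop. A rough sketch of that idea (all helper names below are hypothetical placeholders, not what the series actually adds to pve-guest-common):

```perl
use strict;
use warnings;
use PVE::Tools;

# Hypothetical outline of "overrule shutdown": abort still-active shutdown
# tasks for this guest before stopping it. list_active_tasks() and
# abort_task() are placeholder names, not real pve-guest-common functions.
sub overrule_shutdown_tasks {
    my ($vmid) = @_;

    my $aborted = 0;
    for my $task (@{ list_active_tasks() }) {          # placeholder
        my $upid = PVE::Tools::upid_decode($task->{upid});
        next if $upid->{type} !~ /^(qm|vz)shutdown$/;  # only shutdown tasks
        next if $upid->{id} ne $vmid;                  # only this guest
        abort_task($task->{upid});                     # placeholder
        $aborted++;
    }
    return $aborted;
}
```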
Re: [pve-devel] Thank you for being part of pve-devel
On 9/26/23 00:34, Alexandre Aguirre via pve-devel wrote:
> Hello, good evening!
> My name is Alexandre, I'm Brazilian and I've been a Proxmox user since 2015.
> I'm an active collaborator in BR communities as well as groups on Telegram
> and WhatsApp. I'd like to know how I can contribute to the solution; I'm
> available to be part of the growth and evolution of the tool. Now I have a
> question: I would like to know how this discussion list works. Thank you in
> advance for your attention!

Hi,

thanks for your message. In general, you can find information about how to
contribute on our contribution page [0]. This includes links to developer
documentation, the bug tracker, mailing lists, etc.

More concretely, this list is intended for communication between developers,
where we send patches, review and discuss them, etc. So I'd recommend you
look at the mentioned link for a way you can help :)

Hope this answers your request. If not, just reply to this message with your
questions.

With kind regards,
Dominik

0: https://www.proxmox.com/en/about/developers
Re: [pve-devel] applied: [PATCH v3 docs 1/1] fix #2920: add cifs options parameter
Is it possible that this pve-docs patch got lost and was not actually
applied? At least I don't see it here:

https://git.proxmox.com/?p=pve-docs.git;a=history;f=pve-storage-cifs.adoc;h=df63b58d6eefc5c7e2ee302e4ac57fa52c8c372e;hb=0aa61e04787ca6ac791fe6bce28686c9a9fc9ade

... whereas the pve-storage patch was applied fine:

https://git.proxmox.com/?p=pve-storage.git;a=commit;h=13ee4fc8593f4c247a65448b9c746643ce5d3c0c

On 07/06/2023 17:46, Thomas Lamprecht wrote:
> On 01/03/2023 at 13:13, Fiona Ebner wrote:
>> From: Stefan Hrdlicka
>>
>> Signed-off-by: Stefan Hrdlicka
>> [FE: rebase + style fix]
>> Signed-off-by: Fiona Ebner
>> ---
>>
>> Changes from v2:
>> * use {pve} instead of PVE
>>
>>  pve-storage-cifs.adoc | 8
>>  1 file changed, 8 insertions(+)
>>
>
> applied, thanks!
[pve-devel] applied: [PATCH v3 docs 1/1] fix #2920: add cifs options parameter
On 01/03/2023 at 13:13, Fiona Ebner wrote:
> From: Stefan Hrdlicka
>
> Signed-off-by: Stefan Hrdlicka
> [FE: rebase + style fix]
> Signed-off-by: Fiona Ebner
> ---
>
> Changes from v2:
> * use {pve} instead of PVE
>
>  pve-storage-cifs.adoc | 8
>  1 file changed, 8 insertions(+)
>

Now actually applied, thanks to you and also to Friedrich for noticing this
error of mine!
[pve-devel] applied: [PATCH manager master stable-7] pve7to8: Fix Fedora 38 systemd unified cgroupv2 check
On 28/08/2023 at 09:54, Christian Ebner wrote:
> For Fedora 38, the systemd shared object files used to check the systemd
> version are located at /usr/lib64/systemd or /usr/lib/systemd. Therefore,
> include /usr/lib64/systemd in the list of directories to check.
>
> Further, Fedora 38 adds an fc38 postfix to the filename, so expand the
> regex to cover that as well.
>
> Signed-off-by: Christian Ebner
> ---
>
> Reported by users via the forum:
> https://forum.proxmox.com/threads/128721/#post-584456
>
>  PVE/CLI/pve7to8.pm | 10 --
>  1 file changed, 8 insertions(+), 2 deletions(-)
>

applied, thanks!
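For context, the check essentially globs the systemd shared objects and pulls the version out of the file name. A rough sketch of that approach (illustrative only, not the actual pve7to8.pm code; paths and regex are assumptions):

```perl
use strict;
use warnings;

# Rough sketch of the version probe described above: look in both library
# locations and tolerate a distro postfix such as ".fc38" in the name.
my @libdirs = ('/usr/lib/systemd', '/usr/lib64/systemd');

my $systemd_version;
DIR: for my $dir (@libdirs) {
    next if !-d $dir;
    for my $so (glob("$dir/libsystemd-shared-*.so*")) {
        # matches e.g. libsystemd-shared-252.so and libsystemd-shared-253.7-1.fc38.so
        if ($so =~ m/libsystemd-shared-(\d+)[^\/]*\.so/) {
            $systemd_version = $1;
            last DIR;
        }
    }
}

print "detected systemd version: " . ($systemd_version // 'unknown') . "\n";
```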
Re: [pve-devel] [PATCH manager v8 1/2] fix #4849: api: download to storage: automatically detect and configure compression
On 27/09/2023 at 10:57, Philipp Hufnagl wrote:
> On 9/27/23 10:33, Thomas Lamprecht wrote:
>> FYI, I have a pretty much done patch for this already here, so if
>> you did not have anything (close to) already finished, I could push
>> that out – just mentioning this to avoid potential duplicate work.
>
> I have nothing close. Sorry for the inconvenience, and thank you for
> fixing it.

OK, I now applied the change below, doing this in the frontend, and reverted
the API change. FWIW, this is not 100% semantically the same, as I now try
to match the file extension with the case-insensitive flag, to also allow
matching, e.g., an ISO.GZ.

8<

diff --git a/www/manager6/window/DownloadUrlToStorage.js b/www/manager6/window/DownloadUrlToStorage.js
index 481cb2ed..335d6aa6 100644
--- a/www/manager6/window/DownloadUrlToStorage.js
+++ b/www/manager6/window/DownloadUrlToStorage.js
@@ -66,7 +66,6 @@ Ext.define('PVE.window.DownloadUrlToStorage', {
             params: {
                 url: queryParam.url,
                 'verify-certificates': queryParam['verify-certificates'],
-                'detect-compression': view.content === 'iso' ? 1 : 0,
             },
             waitMsgTarget: view,
             failure: res => {
@@ -81,11 +80,22 @@ Ext.define('PVE.window.DownloadUrlToStorage', {
                 urlField.validate();
 
                 let data = res.result.data;
+
+                let filename = data.filename || "";
+                let compression = '__default__';
+                if (view.content === 'iso') {
+                    const matches = filename.match(/^(.+)\.(gz|lzo|zst)$/i);
+                    if (matches) {
+                        filename = matches[1];
+                        compression = matches[2];
+                    }
+                }
+
                 view.setValues({
-                    filename: data.filename || "",
+                    filename,
+                    compression,
                     size: (data.size && Proxmox.Utils.format_size(data.size)) || gettext("Unknown"),
                     mimetype: data.mimetype || gettext("Unknown"),
-                    compression: data.compression || '__default__',
                 });
             },
         });
[pve-devel] applied-series: [PATCH v3 stable-7+master manager 1/3] ui: vm selector: gracefully handle empty IDs in setValue function
On 25/09/2023 at 13:58, Fiona Ebner wrote:
> An empty string is passed by the backup job window when using selection
> mode 'all'; it would be converted to [""] and wrongly add an entry with
> VMID 0, because the item "" could not be found in the store.
>
> Reported in the community forum:
> https://forum.proxmox.com/threads/130164/
>
> Fixes: 7a5ca76a ("fix #4239: ui: show selected but non-existing vmids in backup edit")
> Suggested-by: Dominik Csapak
> Signed-off-by: Fiona Ebner
> ---
>
> It is enough to apply this or the second patch to stable-7 to fix the
> issue.
>
> Changes in v3:
> * use a filter function to handle more general cases like "100,,200",
>   not just the empty string.
>
>  www/manager6/form/VMSelector.js | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>

Applied the series, and the first one also to stable-7, thanks!

P.S.: you have my OK for applying your own patches if there was positive
review/testing feedback.
[pve-devel] applied: [PATCH v2 stable-7 manager] pve7to8: Add check for dkms modules
On 01/08/2023 at 10:42, Christian Ebner wrote:
> ... and warn if at least one is present.
>
> Signed-off-by: Christian Ebner
> ---
>
> Changes since v1:
> * do not use which to check for dkms, use the exit code directly
>
>  PVE/CLI/pve7to8.pm | 22 ++
>  1 file changed, 22 insertions(+)
>

applied, thanks!
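The v1 to v2 change is only about how the presence of dkms is probed: instead of looking the binary up with `which` first, call `dkms status` directly and look at its exit code and output. A minimal sketch of that idea (assumed logic, not the applied patch verbatim):

```perl
use strict;
use warnings;

# Minimal sketch (assumed logic, not the actual pve7to8 code): run
# `dkms status` directly and rely on its exit code and output instead of
# first probing for the binary with `which`.
my $output = `dkms status 2>/dev/null`;

if ($? == 0 && defined($output) && $output =~ /\S/) {
    my @modules = grep { /\S/ } split(/\n/, $output);
    warn "found " . scalar(@modules) . " dkms module(s) - "
        . "they may need to be rebuilt for the new kernel after the upgrade\n";
} else {
    print "no dkms modules found (or dkms not installed)\n";
}
```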
[pve-devel] [PATCH v3 qemu-server 0/2] remote-migration: migration with different cpu
This patch series allows remote migration between clusters with different
CPU models.

Two new params are introduced: "target-cpu" && "target-reboot".

If target-cpu is defined, it will replace the CPU model of the target VM.

If the VM is online/running, an extra "target-reboot" safeguard option is
needed. Indeed, as the target CPU is different, the live migration with
memory transfer is skipped (as the target would die with a different CPU
anyway). Then, after the storage copy, we switch the source VM disks to the
target VM's NBD exports, shut down the source VM and restart the target VM
(like a virtual reboot between source/target).

Changelog v2:

The first version was simply shutting down the target VM, without doing the
block-job-complete. After doing production migrations with around 400 VMs,
I had some fs corruption, as if some data was still sitting in a buffer.

This v2 has been tested with another batch of 400 VMs, without any
corruption.

Changelog v3:

v2 was not perfect, I still had 1 or 2 fs corruptions with VMs doing a lot
of writes. This v3 takes up the idea of v1 again, but in a cleaner way:

- we migrate the disks to the target VM
- the source VM switches its disks to the NBD exports of the target VM
  (with a block-job-complete, and not a block-job-cancel as in standard disk
  migration); we are 100% sure that no pending write is left behind by the
  migration job
- the source VM is shut down
- the target VM is restarted

We have done a lot of migrations again this summer (maybe another 4000 VMs):
0 corruptions, on Windows and Linux guest VMs.

Alexandre Derumier (2):
  migration: move livemigration code in a dedicated sub
  remote-migration: add target-cpu && target-reboot params

 PVE/API2/Qemu.pm   |  28 ++-
 PVE/CLI/qm.pm      |  11 ++
 PVE/QemuMigrate.pm | 451 -
 3 files changed, 281 insertions(+), 209 deletions(-)

-- 
2.39.2
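Roughly, the sequence described above looks like this. This is an illustrative outline only; the sub names are assumptions, and the real implementation is in patch 2/2:

```perl
# Illustrative outline of the target-reboot flow from the cover letter; the
# sub names below are placeholders, the real code is in QemuMigrate.pm (2/2).
sub migrate_with_target_reboot {
    my ($self, $vmid) = @_;

    # 1. mirror all disks to the NBD exports of the (not yet started) target VM
    $self->sync_disks_to_target($vmid);

    # 2. finish the mirror with block-job-complete (not block-job-cancel),
    #    so no write is left pending in the migration job
    $self->complete_block_jobs($vmid);

    # 3. cleanly shut down the source VM, flushing guest caches
    $self->shutdown_source_vm($vmid);

    # 4. ask the remote side to start the target VM with its new CPU model
    $self->remote_tunnel_command('restart');
}
```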
[pve-devel] [PATCH v3 qemu-server 1/2] migration: move livemigration code in a dedicated sub
---
 PVE/QemuMigrate.pm | 420 +++--
 1 file changed, 214 insertions(+), 206 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index f41c61f..5ea78a7 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -726,6 +726,219 @@ sub cleanup_bitmaps {
     }
 }
 
+sub live_migration {
+my ($self, $vmid, $migrate_uri, $spice_port) = @_;
+
+my $conf = $self->{vmconf};
+
+$self->log('info', "starting online/live migration on $migrate_uri");
+$self->{livemigration} = 1;
+
+# load_defaults
+my $defaults = PVE::QemuServer::load_defaults();
+
+$self->log('info', "set migration capabilities");
+eval { PVE::QemuServer::set_migration_caps($vmid) };
+warn $@ if $@;
+
+my $qemu_migrate_params = {};
+
+# migrate speed can be set via bwlimit (datacenter.cfg and API) and via the
+# migrate_speed parameter in qm.conf - take the lower of the two.
+my $bwlimit = $self->get_bwlimit();
+
+my $migrate_speed = $conf->{migrate_speed} // 0;
+$migrate_speed *= 1024; # migrate_speed is in MB/s, bwlimit in KB/s
+
+if ($bwlimit && $migrate_speed) {
+    $migrate_speed = ($bwlimit < $migrate_speed) ? $bwlimit : $migrate_speed;
+} else {
+    $migrate_speed ||= $bwlimit;
+}
+$migrate_speed ||= ($defaults->{migrate_speed} || 0) * 1024;
+
+if ($migrate_speed) {
+    $migrate_speed *= 1024; # qmp takes migrate_speed in B/s.
+    $self->log('info', "migration speed limit: ". render_bytes($migrate_speed, 1) ."/s");
+} else {
+    # always set migrate speed as QEMU default to 128 MiBps == 1 Gbps, use 16 GiBps == 128 Gbps
+    $migrate_speed = (16 << 30);
+}
+$qemu_migrate_params->{'max-bandwidth'} = int($migrate_speed);
+
+my $migrate_downtime = $defaults->{migrate_downtime};
+$migrate_downtime = $conf->{migrate_downtime} if defined($conf->{migrate_downtime});
+# migrate-set-parameters expects limit in ms
+$migrate_downtime *= 1000;
+$self->log('info', "migration downtime limit: $migrate_downtime ms");
+$qemu_migrate_params->{'downtime-limit'} = int($migrate_downtime);
+
+# set cachesize to 10% of the total memory
+my $memory = get_current_memory($conf->{memory});
+my $cachesize = int($memory * 1048576 / 10);
+$cachesize = round_powerof2($cachesize);
+
+$self->log('info', "migration cachesize: " . render_bytes($cachesize, 1));
+$qemu_migrate_params->{'xbzrle-cache-size'} = int($cachesize);
+
+$self->log('info', "set migration parameters");
+eval {
+    mon_cmd($vmid, "migrate-set-parameters", %{$qemu_migrate_params});
+};
+$self->log('info', "migrate-set-parameters error: $@") if $@;
+
+if (PVE::QemuServer::vga_conf_has_spice($conf->{vga}) && !$self->{opts}->{remote}) {
+    my $rpcenv = PVE::RPCEnvironment::get();
+    my $authuser = $rpcenv->get_user();
+
+    my (undef, $proxyticket) = PVE::AccessControl::assemble_spice_ticket($authuser, $vmid, $self->{node});
+
+    my $filename = "/etc/pve/nodes/$self->{node}/pve-ssl.pem";
+    my $subject = PVE::AccessControl::read_x509_subject_spice($filename);
+
+    $self->log('info', "spice client_migrate_info");
+
+    eval {
+        mon_cmd($vmid, "client_migrate_info", protocol => 'spice',
+            hostname => $proxyticket, 'port' => 0, 'tls-port' => $spice_port,
+            'cert-subject' => $subject);
+    };
+    $self->log('info', "client_migrate_info error: $@") if $@;
+
+}
+
+my $start = time();
+
+$self->log('info', "start migrate command to $migrate_uri");
+eval {
+    mon_cmd($vmid, "migrate", uri => $migrate_uri);
+};
+my $merr = $@;
+$self->log('info', "migrate uri => $migrate_uri failed: $merr") if $merr;
+
+my $last_mem_transferred = 0;
+my $usleep = 100;
+my $i = 0;
+my $err_count = 0;
+my $lastrem = undef;
+my $downtimecounter = 0;
+while (1) {
+    $i++;
+    my $avglstat = $last_mem_transferred ? $last_mem_transferred / $i : 0;
+
+    usleep($usleep);
+
+    my $stat = eval { mon_cmd($vmid, "query-migrate") };
+    if (my $err = $@) {
+        $err_count++;
+        warn "query migrate failed: $err\n";
+        $self->log('info', "query migrate failed: $err");
+        if ($err_count <= 5) {
+            usleep(1_000_000);
+            next;
+        }
+        die "too many query migrate failures - aborting\n";
+    }
+
+    my $status = $stat->{status};
+    if (defined($status) && $status =~ m/^(setup)$/im) {
+        sleep(1);
+        next;
+    }
+
+    if (!defined($status) || $status !~ m/^(active|completed|failed|cancelled)$/im) {
+        die $merr if $merr;
+        die "unable to parse migration status '$status' - aborting\n";
+    }
+    $merr = undef;
+    $err_count = 0;
+
+    my
[pve-devel] [PATCH v3 qemu-server 2/2] remote-migration: add target-cpu && target-reboot params
This patch adds support for remote migration when the target CPU model is
different.

The target-reboot param needs to be defined to allow migration when the
source VM is online. When defined, only the live storage migration is done;
instead of transferring the memory, we cleanly shut down the source VM and
restart the target VM (like a virtual reboot between source/dest).
---
 PVE/API2/Qemu.pm   | 28 +++-
 PVE/CLI/qm.pm      | 11 +++
 PVE/QemuMigrate.pm | 31 +--
 3 files changed, 67 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 774b0c7..e1cefba 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4586,6 +4586,17 @@ __PACKAGE__->register_method({
             optional => 1,
             default => 0,
         },
+        'target-cpu' => {
+            optional => 1,
+            description => "Target Emulated CPU model. For online migration, this require target-reboot option",
+            type => 'string',
+            format => 'pve-vm-cpu-conf',
+        },
+        'target-reboot' => {
+            type => 'boolean',
+            description => "For online migration , don't migrate memory, only storage. Then, the source vm is shutdown and the target vm is restarted.",
+            optional => 1,
+        },
         'target-storage' => get_standard_option('pve-targetstorage', {
             completion => \&PVE::QemuServer::complete_migration_storage,
             optional => 0,
@@ -4664,9 +4675,12 @@ __PACKAGE__->register_method({
     my $is_replicated = $repl_conf->check_for_existing_jobs($source_vmid, 1);
     die "cannot remote-migrate replicated VM\n" if $is_replicated;
 
+    my $target_cpu = extract_param($param, 'target-cpu');
+    my $target_reboot = extract_param($param, 'target-reboot');
+
     if (PVE::QemuServer::check_running($source_vmid)) {
         die "can't migrate running VM without --online\n" if !$param->{online};
-
+        die "can't migrate running VM without --target-reboot when target cpu is different" if $target_cpu && !$target_reboot;
     } else {
         warn "VM isn't running. Doing offline migration instead.\n" if $param->{online};
         $param->{online} = 0;
@@ -4683,11 +4697,14 @@ __PACKAGE__->register_method({
     raise_param_exc({ 'target-bridge' => "failed to parse bridge map: $@" }) if $@;
 
+
     die "remote migration requires explicit storage mapping!\n" if $storagemap->{identity};
 
     $param->{storagemap} = $storagemap;
     $param->{bridgemap} = $bridgemap;
+    $param->{targetcpu} = $target_cpu;
+    $param->{targetreboot} = $target_reboot;
     $param->{remote} = {
         conn => $conn_args, # re-use fingerprint for tunnel
         client => $api_client,
@@ -5732,6 +5749,15 @@ __PACKAGE__->register_method({
         PVE::QemuServer::nbd_stop($state->{vmid});
         return;
     },
+    'restart' => sub {
+        PVE::QemuServer::vm_stop(undef, $state->{vmid}, 1, 1);
+        my $info = PVE::QemuServer::vm_start_nolock(
+            $state->{storecfg},
+            $state->{vmid},
+            $state->{conf},
+        );
+        return;
+    },
     'resume' => sub {
         if (PVE::QemuServer::Helpers::vm_running_locally($state->{vmid})) {
             PVE::QemuServer::vm_resume($state->{vmid}, 1, 1);
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index b17b4fe..9d89cfe 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -189,6 +189,17 @@ __PACKAGE__->register_method({
             optional => 1,
             default => 0,
         },
+        'target-cpu' => {
+            optional => 1,
+            description => "Target Emulated CPU model. For online migration, this require target-reboot option",
+            type => 'string',
+            format => 'pve-vm-cpu-conf',
+        },
+        'target-reboot' => {
+            type => 'boolean',
+            description => "For online migration , don't migrate memory, only storage. Then, the source vm is shutdown and the target vm is restarted.",
+            optional => 1,
+        },
         'target-storage' => get_standard_option('pve-targetstorage', {
             completion => \&PVE::QemuServer::complete_migration_storage,
             optional => 0,
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 5ea78a7..0eaa73d 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -729,6 +729,11 @@ sub cleanup_bitmaps {
 sub live_migration {
 my ($self, $vmid, $migrate_uri, $spice_port) = @_;
 
+if($self->{opts}->{targetreboot}){
+    $self->log('info', "target reboot - skip live migration.");
+    return;
+}
+
 my $conf = $self->{vmc