[pve-devel] [PATCH pve-container 2/3] fix #3903: api2: remove vmid from jobs.cfg
... on destroy if 'purge' is selected

Signed-off-by: Hannes Laimer
---
 src/PVE/API2/LXC.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 84712f7..2e4146e 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -758,6 +758,7 @@ __PACKAGE__->register_method({
 	    print "purging CT $vmid from related configurations..\n";
 	    PVE::ReplicationConfig::remove_vmid_jobs($vmid);
 	    PVE::VZDump::Plugin::remove_vmid_from_backup_jobs($vmid);
+	    PVE::Jobs::Plugin::remove_vmid_from_jobs($vmid);
 
 	    if ($ha_managed) {
 		PVE::HA::Config::delete_service_from_config("ct:$vmid");
-- 
2.30.2

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH qemu-server 3/3] fix #3903: api2: remove vmid from jobs.cfg
... on destroy if 'purge' is selected

Signed-off-by: Hannes Laimer
---
 PVE/API2/Qemu.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9be1caf..f100d2c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1696,6 +1696,7 @@ __PACKAGE__->register_method({
 	    print "purging VM $vmid from related configurations..\n";
 	    PVE::ReplicationConfig::remove_vmid_jobs($vmid);
 	    PVE::VZDump::Plugin::remove_vmid_from_backup_jobs($vmid);
+	    PVE::Jobs::Plugin::remove_vmid_from_jobs($vmid);
 
 	    if ($ha_managed) {
 		PVE::HA::Config::delete_service_from_config("vm:$vmid");
-- 
2.30.2
[pve-devel] [PATCH-SERIES] fix #3903: remove vmid from jobs.cfg on destroy
... if 'purge' is selected.

pve-manager:

Hannes Laimer (1):
  fix #3903: jobs-plugin: add remove vmid from jobs helper

 PVE/Jobs/Plugin.pm | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)

pve-container:

Hannes Laimer (1):
  fix #3903: api2: remove vmid from jobs.cfg

 src/PVE/API2/LXC.pm | 1 +
 1 file changed, 1 insertion(+)

qemu-server:

Hannes Laimer (1):
  fix #3903: api2: remove vmid from jobs.cfg

 PVE/API2/Qemu.pm | 1 +
 1 file changed, 1 insertion(+)

-- 
2.30.2
[pve-devel] [PATCH pve-manager 1/3] fix #3903: jobs-plugin: add remove vmid from jobs helper
Signed-off-by: Hannes Laimer
---
 PVE/Jobs/Plugin.pm | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/PVE/Jobs/Plugin.pm b/PVE/Jobs/Plugin.pm
index 6098360b..4883a193 100644
--- a/PVE/Jobs/Plugin.pm
+++ b/PVE/Jobs/Plugin.pm
@@ -3,7 +3,7 @@ package PVE::Jobs::Plugin;
 use strict;
 use warnings;
 
-use PVE::Cluster qw(cfs_register_file);
+use PVE::Cluster qw(cfs_register_file cfs_lock_file cfs_read_file cfs_write_file);
 
 use base qw(PVE::SectionConfig);
 
@@ -92,6 +92,23 @@ sub write_config {
     $class->SUPER::write_config($filename, $cfg);
 }
 
+sub remove_vmid_from_jobs {
+    my ($vmid) = @_;
+    cfs_lock_file('jobs.cfg', undef, sub {
+	my $jobs_data = cfs_read_file('jobs.cfg');
+	while ((my $id, my $job) = each (%{$jobs_data->{ids}})) {
+	    next if !defined($job->{vmid});
+	    $job->{vmid} = join(',', grep { $_ ne $vmid } PVE::Tools::split_list($job->{vmid}));
+	    if ($job->{vmid} eq '') {
+		delete $jobs_data->{ids}->{$id};
+	    } else {
+		$jobs_data->{ids}->{$id} = $job;
+	    }
+	}
+	cfs_write_file('jobs.cfg', $jobs_data);
+    });
+}
+
 sub run {
     my ($class, $cfg) = @_;
     # implement in subclass
-- 
2.30.2
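The core of the helper above is simple: for each job, drop the purged vmid from the job's comma-separated vmid list, and delete the job entirely when no guests remain. A minimal standalone sketch of that same logic, translated to Python for illustration (the real code operates on the pmxcfs-backed jobs.cfg under a cluster-wide lock, which this sketch omits):

```python
# Illustrative sketch only, not the actual PVE::Jobs::Plugin code: jobs maps
# a job id to its config dict, and the 'vmid' key holds a comma-separated list.
def remove_vmid_from_jobs(jobs, vmid):
    for job_id in list(jobs):  # copy keys: we may delete while iterating
        job = jobs[job_id]
        if 'vmid' not in job:
            continue
        remaining = [v for v in job['vmid'].split(',') if v and v != str(vmid)]
        if remaining:
            job['vmid'] = ','.join(remaining)
        else:
            # the job covered only the purged guest, so drop it entirely
            del jobs[job_id]
    return jobs

jobs = {
    'backup-1': {'type': 'vzdump', 'vmid': '100,101'},
    'backup-2': {'type': 'vzdump', 'vmid': '101'},
}
print(remove_vmid_from_jobs(jobs, 101))
# → {'backup-1': {'type': 'vzdump', 'vmid': '100'}}
```

Deleting a job whose vmid list becomes empty matches the patch's behavior of not leaving behind jobs that can never match a guest again.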
[pve-devel] applied: [PATCH guest-common] GuestHelpers: fix snapshot indentation length
On 28.02.22 15:46, Dominik Csapak wrote:
> if a user has many snapshots, the length goes negative and produces
> wrong indentation, so clamp it at 0
>
> reported by a user in the forum:
> https://forum.proxmox.com/threads/non-threaded-listsnaphost-view.105740/
>
> Signed-off-by: Dominik Csapak
> ---
> for many snapshots this still looks weird, but has a consistent
> indentation. to do it completely right, we'd have to iterate twice and
> find the longest line first and use that as width for the first column.
> not sure if worth the effort.

yeah I think that for now it's OK to not do (La)TeX-level features here,
albeit it'd not be *that* hard.

FWIW, I reduced the space indentation per level from two to one, looks
better that way and allows for more snapshots to be displayed correctly
before clamping kicks in.

>
>  src/PVE/GuestHelpers.pm | 1 +
>  1 file changed, 1 insertion(+)
>

applied, thanks!
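The bug and fix discussed above come down to one expression: the first-column width shrinks by a fixed amount per nesting level, and with enough snapshots it goes negative, producing garbage padding; clamping it at 0 keeps the output consistent. A hedged sketch (not the actual GuestHelpers code; names and widths are illustrative):

```python
# Illustrative sketch of the clamping fix: compute the padding for a
# snapshot-tree line. Without max(0, ...), deep trees drive the width
# negative and the printed indentation becomes inconsistent.
def snapshot_prefix(depth, base_width=10, indent_per_level=1):
    # clamp at 0 so deeply nested snapshots still render with a
    # consistent (if fully collapsed) indentation
    width = max(0, base_width - depth * indent_per_level)
    return '`-> ' + ' ' * width

for depth in (0, 4, 8, 12):
    print(repr(snapshot_prefix(depth)))
```

Reducing `indent_per_level` from two to one, as mentioned in the reply, simply doubles the nesting depth at which the clamp kicks in.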
[pve-devel] [PATCH qemu-server 1/1] enable balloon free-page-reporting
Allow the balloon device driver to report hints of guest free pages to
the host, for automatic memory reclaim.

https://lwn.net/Articles/759413/
https://events19.linuxfoundation.org/wp-content/uploads/2017/12/KVMForum2018.pdf

Signed-off-by: Alexandre Derumier
---
 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 42f0fbd..bb44e58 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3846,7 +3846,7 @@ sub config_to_command {
     # enable balloon by default, unless explicitly disabled
     if (!defined($conf->{balloon}) || $conf->{balloon}) {
 	my $pciaddr = print_pci_addr("balloon0", $bridges, $arch, $machine_type);
-	push @$devices, '-device', "virtio-balloon-pci,id=balloon0$pciaddr";
+	push @$devices, '-device', "virtio-balloon-pci,free-page-reporting=on,id=balloon0$pciaddr";
     }
 
     if ($conf->{watchdog}) {
-- 
2.30.2
[pve-devel] [PATCH qemu-server 0/1] enable balloon free-page-reporting
Hi,

Currently, if a guest VM allocates a memory page and later frees it
inside the guest, the memory is not freed on the host side.

The balloon device has a new "free-page-reporting" option since QEMU 5.1
(it also needs host kernel 5.7 or newer):

https://events19.linuxfoundation.org/wp-content/uploads/2017/12/KVMForum2018.pdf
https://lwn.net/Articles/759413/

This works like the discard option for disks: memory is freed
asynchronously by the host when the VM frees it.

I've been running it in production for a month without any problem.
With a lot of VMs and a spiky workload, the amount of memory freed is
really huge.

This patch enables it by default. This doesn't break live migration from
an old QEMU process without free-page-reporting to a new QEMU process
with it enabled, but it does break migration in the reverse direction.
So I could instead enable it by default only for the next QEMU machine
version, or add an extra option to enable it. I don't know whether we
should extend the "balloon" option, or add an extra option like
"balloonoptions: ...". What do you think?

Alexandre Derumier (1):
  enable balloon free-page-reporting

 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.30.2
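The "enable it only for the next machine version" alternative raised above would gate the device property on the VM's machine type, so a VM started on an old host keeps an identical device line and can still migrate back. A hypothetical sketch of that idea in Python (the function name, threshold, and PCI address here are purely illustrative, not qemu-server's actual API):

```python
# Illustrative sketch only: build the balloon -device argument, adding
# free-page-reporting only for new enough machine versions so that the
# device stays identical for VMs started before the change.
def balloon_device_args(machine_version, pci_addr=',bus=pci.0,addr=0x3'):
    props = ['virtio-balloon-pci']
    # hypothetical gate: only machine types >= 5.1 get the new property,
    # keeping reverse live migration working for older machine types
    if machine_version >= (5, 1):
        props.append('free-page-reporting=on')
    props.append('id=balloon0')
    return '-device ' + ','.join(props) + pci_addr

print(balloon_device_args((5, 2)))
print(balloon_device_args((5, 0)))
```

Gating on the machine version mirrors how qemu-server handles other guest-visible device changes, at the cost of older machine types never getting the feature.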