[pve-devel] [PATCH/RFC qemu] backup: add patch to initialize bcs bitmap early enough for PBS

2022-03-02 Thread Fabian Ebner
This is necessary for multi-disk backups where not all jobs are
immediately started after they are created. QEMU commit
06e0a9c16405c0a4c1eca33cf286cc04c42066a2 already did part of the work,
ensuring that new writes after job creation don't pass through to the
backup, but not yet for the MIRROR_SYNC_MODE_BITMAP case, which is used
for PBS.

Signed-off-by: Fabian Ebner 
---
 ...e-bcs-bitmap-initialization-to-job-c.patch | 58 +++
 ...E-Backup-add-vma-backup-format-code.patch} |  0
 ...Backup-add-backup-dump-block-driver.patch} |  4 +-
 ...kup-proxmox-backup-patches-for-qemu.patch} |  0
 ...store-new-command-to-restore-from-p.patch} |  0
 ...rty-bitmap-tracking-for-incremental.patch} |  0
 ...patch => 0031-PVE-various-PBS-fixes.patch} |  0
 ...-driver-to-map-backup-archives-into.patch} |  0
 ...d-query_proxmox_support-QMP-command.patch} |  0
 ...-add-query-pbs-bitmap-info-QMP-call.patch} |  0
 ...t-stderr-to-journal-when-daemonized.patch} |  0
 ...-sequential-job-transaction-support.patch} |  0
 ...transaction-to-synchronize-job-stat.patch} |  0
 ...block-on-finishing-and-cleanup-crea.patch} |  0
 ...grate-dirty-bitmap-state-via-savevm.patch} |  0
 ...irty-bitmap-migrate-other-bitmaps-e.patch} |  0
 ...ll-back-to-open-iscsi-initiatorname.patch} |  0
 ...outine-QMP-for-backup-cancel_backup.patch} |  0
 ... => 0043-PBS-add-master-key-support.patch} |  0
 ...t-path-reads-without-allocation-if-.patch} |  0
 ...VE-block-stream-increase-chunk-size.patch} |  0
 ...ccept-NULL-qiov-in-bdrv_pad_request.patch} |  0
 ...> 0047-block-add-alloc-track-driver.patch} |  0
 ...alid-QAPI-names-for-backwards-compa.patch} |  0
 ...register-yank-before-migration_inco.patch} |  0
 ...add-l-option-for-loading-a-snapshot.patch} |  0
 debian/patches/series | 51 
 27 files changed, 86 insertions(+), 27 deletions(-)
 create mode 100644 debian/patches/pve/0025-block-backup-move-bcs-bitmap-initialization-to-job-c.patch
 rename debian/patches/pve/{0025-PVE-Backup-add-vma-backup-format-code.patch => 0026-PVE-Backup-add-vma-backup-format-code.patch} (100%)
 rename debian/patches/pve/{0026-PVE-Backup-add-backup-dump-block-driver.patch => 0027-PVE-Backup-add-backup-dump-block-driver.patch} (98%)
 rename debian/patches/pve/{0027-PVE-Backup-proxmox-backup-patches-for-qemu.patch => 0028-PVE-Backup-proxmox-backup-patches-for-qemu.patch} (100%)
 rename debian/patches/pve/{0028-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch => 0029-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch} (100%)
 rename debian/patches/pve/{0029-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch => 0030-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch} (100%)
 rename debian/patches/pve/{0030-PVE-various-PBS-fixes.patch => 0031-PVE-various-PBS-fixes.patch} (100%)
 rename debian/patches/pve/{0031-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch => 0032-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch} (100%)
 rename debian/patches/pve/{0032-PVE-add-query_proxmox_support-QMP-command.patch => 0033-PVE-add-query_proxmox_support-QMP-command.patch} (100%)
 rename debian/patches/pve/{0033-PVE-add-query-pbs-bitmap-info-QMP-call.patch => 0034-PVE-add-query-pbs-bitmap-info-QMP-call.patch} (100%)
 rename debian/patches/pve/{0034-PVE-redirect-stderr-to-journal-when-daemonized.patch => 0035-PVE-redirect-stderr-to-journal-when-daemonized.patch} (100%)
 rename debian/patches/pve/{0035-PVE-Add-sequential-job-transaction-support.patch => 0036-PVE-Add-sequential-job-transaction-support.patch} (100%)
 rename debian/patches/pve/{0036-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch => 0037-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch} (100%)
 rename debian/patches/pve/{0037-PVE-Backup-Don-t-block-on-finishing-and-cleanup-crea.patch => 0038-PVE-Backup-Don-t-block-on-finishing-and-cleanup-crea.patch} (100%)
 rename debian/patches/pve/{0038-PVE-Migrate-dirty-bitmap-state-via-savevm.patch => 0039-PVE-Migrate-dirty-bitmap-state-via-savevm.patch} (100%)
 rename debian/patches/pve/{0039-migration-block-dirty-bitmap-migrate-other-bitmaps-e.patch => 0040-migration-block-dirty-bitmap-migrate-other-bitmaps-e.patch} (100%)
 rename debian/patches/pve/{0040-PVE-fall-back-to-open-iscsi-initiatorname.patch => 0041-PVE-fall-back-to-open-iscsi-initiatorname.patch} (100%)
 rename debian/patches/pve/{0041-PVE-Use-coroutine-QMP-for-backup-cancel_backup.patch => 0042-PVE-Use-coroutine-QMP-for-backup-cancel_backup.patch} (100%)
 rename debian/patches/pve/{0042-PBS-add-master-key-support.patch => 0043-PBS-add-master-key-support.patch} (100%)
 rename debian/patches/pve/{0043-PVE-block-pbs-fast-path-reads-without-allocation-if-.patch => 0044-PVE-block-pbs-fast-path-reads-without-allocation-if-.patch} (100%)
 rename debian/patches/pve/{0044-PVE-block-stream-increase-chunk-size.patch => 0045-PVE-block-stream-increase-chunk-size.patch} (100%)
 rename 
d

Re: [pve-devel] [PATCH pve-manager 1/3] fix #3903: jobs-plugin: add remove vmid from jobs helper

2022-03-02 Thread Fabian Ebner
On 01.03.22 at 09:51, Hannes Laimer wrote:
> Signed-off-by: Hannes Laimer 
> ---
>  PVE/Jobs/Plugin.pm | 19 ++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/Jobs/Plugin.pm b/PVE/Jobs/Plugin.pm
> index 6098360b..4883a193 100644
> --- a/PVE/Jobs/Plugin.pm
> +++ b/PVE/Jobs/Plugin.pm
> @@ -3,7 +3,7 @@ package PVE::Jobs::Plugin;
>  use strict;
>  use warnings;
>  
> -use PVE::Cluster qw(cfs_register_file);
> +use PVE::Cluster qw(cfs_register_file cfs_lock_file cfs_read_file cfs_write_file);
>  
>  use base qw(PVE::SectionConfig);
>  
> @@ -92,6 +92,23 @@ sub write_config {
>  $class->SUPER::write_config($filename, $cfg);
>  }
>  
> +sub remove_vmid_from_jobs {

Since this is iterating over jobs of all kinds and not something that
should be overridden by derived plugins, it should rather be part of
Jobs.pm.

If, in the future, plugins ever need more fine-grained control, we can
always add a helper method to remove a vmid from a single job and adapt
this function to call the helper method from the appropriate plugin for
each job.
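(For illustration only, a rough sketch of that future fine-grained variant, with hypothetical names: it assumes a remove_vmid_from_job() method on the plugin base class and uses SectionConfig's lookup() to resolve the job type; none of this exists in the current patch.)

# In PVE/Jobs.pm
sub remove_vmid_from_jobs {
    my ($vmid) = @_;

    cfs_lock_file('jobs.cfg', undef, sub {
        my $jobs_data = cfs_read_file('jobs.cfg');
        for my $id (sort keys %{$jobs_data->{ids}}) {
            my $job = $jobs_data->{ids}->{$id};
            my $plugin = PVE::Jobs::Plugin->lookup($job->{type});
            # hypothetical helper: drops $vmid from the job and returns 1 if
            # the job has no vmids left and should be removed entirely
            delete $jobs_data->{ids}->{$id}
                if $plugin->remove_vmid_from_job($id, $job, $vmid);
        }
        cfs_write_file('jobs.cfg', $jobs_data);
    });
    die $@ if $@;
}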

> +my ($vmid) = @_;

Style nit: we usually put a blank line after the parameter assignment.

> +cfs_lock_file('jobs.cfg', undef, sub {
> + my $jobs_data = cfs_read_file('jobs.cfg');
> + while((my $id, my $job) = each (%{$jobs_data->{ids}})){


Style nits:
* We don't really use 'each'. One reason is that the state of the
iterator is not reset when returning early. Not relevant here, but it's
just not nice behavior.
* Missing spaces before/after outermost parentheses
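(Concretely, something along these lines; just a sketch of the suggested style, same logic as the patch:)

cfs_lock_file('jobs.cfg', undef, sub {
    my $jobs_data = cfs_read_file('jobs.cfg');

    for my $id (sort keys %{$jobs_data->{ids}}) {
        my $job = $jobs_data->{ids}->{$id};
        next if !defined($job->{vmid});

        $job->{vmid} = join(',', grep { $_ ne $vmid } PVE::Tools::split_list($job->{vmid}));
        delete $jobs_data->{ids}->{$id} if $job->{vmid} eq '';
    }

    cfs_write_file('jobs.cfg', $jobs_data);
});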

> + next if !defined($job->{vmid});
> + $job->{vmid} = join(',', grep { $_ ne $vmid } PVE::Tools::split_list($job->{vmid}));
> + if ($job->{vmid} eq '') {
> + delete $jobs_data->{ids}->{$id};
> + } else {
> + $jobs_data->{ids}->{$id} = $job;

Isn't this just assigning the same reference again?
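(Small illustration of the point, with a made-up job id:)

my $jobs_data = { ids => { 'backup-123' => { vmid => '100,101' } } };
my $job = $jobs_data->{ids}->{'backup-123'};
$job->{vmid} = '100';
# $job and $jobs_data->{ids}->{'backup-123'} are the same hash reference,
# so the change is already visible without assigning $job back:
print $jobs_data->{ids}->{'backup-123'}->{vmid}, "\n"; # prints "100"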

> + }
> + }
> + cfs_write_file('jobs.cfg', $jobs_data);
> +});
> +}
> +
>  sub run {
>  my ($class, $cfg) = @_;
>  # implement in subclass





[pve-devel] [PATCH manager] ui: osd: send in/out cmd to currently used node

2022-03-02 Thread Aaron Lauterer
The in & out commands for OSDs are not node specific and can be run on
any node in the Ceph cluster. By sending them to the node currently used
to access the UI they can still be sent even if the node on which the
OSDs are located is down.

This helps in a disaster scenario where a node is down. By default, Ceph
will mark a downed OSD as out after 10 minutes. This could be too long
in some situations. Running the CLI command to mark the OSD as out
earlier on one of the remaining nodes does work, but if the admin is not
used to doing it this way, it adds stress in a potentially already
stressful situation.

Signed-off-by: Aaron Lauterer 
---
 www/manager6/ceph/OSD.js | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index e126f8d0..78f226ff 100644
--- a/www/manager6/ceph/OSD.js
+++ b/www/manager6/ceph/OSD.js
@@ -391,8 +391,13 @@ Ext.define('PVE.node.CephOsdTree', {
let osdid = vm.get('osdid');
 
let doRequest = function() {
+   let targetnode = vm.get('osdhost');
+   // cmds not node specific and need to work if the OSD node is down
+   if (['in', 'out'].includes(cmd)) {
+   targetnode = vm.get('nodename');
+   }
	Proxmox.Utils.API2Request({
-   url: "/nodes/" + vm.get('osdhost') + "/ceph/osd/" + osdid + '/' + cmd,
+   url: `/nodes/${targetnode}/ceph/osd/${osdid}/${cmd}`,
waitMsgTarget: me.getView(),
method: 'POST',
params: params,
-- 
2.30.2






Re: [pve-devel] [PATCH pve-container 2/3] fix #3903: api2: remove vmid from jobs.cfg

2022-03-02 Thread Fabian Ebner
On 01.03.22 at 09:51, Hannes Laimer wrote:
> ... on destroy if 'purge' is selected
> 
> Signed-off-by: Hannes Laimer 
> ---
>  src/PVE/API2/LXC.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
> index 84712f7..2e4146e 100644
> --- a/src/PVE/API2/LXC.pm
> +++ b/src/PVE/API2/LXC.pm
> @@ -758,6 +758,7 @@ __PACKAGE__->register_method({
>   print "purging CT $vmid from related configurations..\n";
>   PVE::ReplicationConfig::remove_vmid_jobs($vmid);
>   PVE::VZDump::Plugin::remove_vmid_from_backup_jobs($vmid);
> + PVE::Jobs::Plugin::remove_vmid_from_jobs($vmid);

Should add a
use PVE::Jobs::Plugin;
(or PVE::Jobs if the function is moved there) to the imports.

Same for the next patch.

>  
>   if ($ha_managed) {
>   PVE::HA::Config::delete_service_from_config("ct:$vmid");





[pve-devel] [PATCH qemu-server] qmp client: increase timeout for thaw

2022-03-02 Thread Fabian Ebner
Using a loop of freeze, sleep 5, thaw, sleep 5, an idling Windows 11
VM with 4 cores and 8 GiB RAM once took 54 seconds to thaw. Thawing took
less than a second about 90% of the time and at most a few seconds in
the majority of other cases, but there can be outliers where 10
seconds is not enough.

And there can be hook scripts executed upon thaw, which might also not
complete instantly.

Signed-off-by: Fabian Ebner 
---
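(Rough sketch of the measurement loop described above; it uses qemu-server's mon_cmd wrapper for the guest-agent commands, but treat it as illustrative rather than the exact script that was used:)

#!/usr/bin/perl
use strict;
use warnings;

use Time::HiRes qw(time sleep);
use PVE::QemuServer::Monitor qw(mon_cmd);

my $vmid = shift // die "usage: $0 <vmid>\n";

while (1) {
    mon_cmd($vmid, 'guest-fsfreeze-freeze');
    sleep(5);

    my $start = time();
    mon_cmd($vmid, 'guest-fsfreeze-thaw');
    printf("thaw took %.2f seconds\n", time() - $start);
    sleep(5);
}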
 PVE/QMPClient.pm | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/PVE/QMPClient.pm b/PVE/QMPClient.pm
index ea4dc0b..8af28e8 100644
--- a/PVE/QMPClient.pm
+++ b/PVE/QMPClient.pm
@@ -113,9 +113,10 @@ sub cmd {
# locked state with high probability, so use an generous timeout
$timeout = 60*60; # 1 hour
} elsif ($cmd->{execute} eq 'guest-fsfreeze-thaw') {
-   # thaw has no possible long blocking actions, either it returns
-   # instantly or never (dead locked)
-   $timeout = 10;
+   # While it should return instantly or never (dead locked) for Linux guests,
+   # the variance for Windows guests can be big. And there might be hook scripts
+   # that are executed upon thaw, so use 3 minutes to be on the safe side.
+   $timeout = 3 * 60;
} elsif ($cmd->{execute} eq 'savevm-start' ||
 $cmd->{execute} eq 'savevm-end' ||
 $cmd->{execute} eq 'query-backup' ||
-- 
2.30.2






Re: [pve-devel] [PATCH pve-container 2/3] fix #3903: api2: remove vmid from jobs.cfg

2022-03-02 Thread Hannes Laimer

On 02.03.22 at 11:16, Fabian Ebner wrote:

On 01.03.22 at 09:51, Hannes Laimer wrote:

... on destroy if 'purge' is selected

Signed-off-by: Hannes Laimer 
---
  src/PVE/API2/LXC.pm | 1 +
  1 file changed, 1 insertion(+)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 84712f7..2e4146e 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -758,6 +758,7 @@ __PACKAGE__->register_method({
print "purging CT $vmid from related configurations..\n";
PVE::ReplicationConfig::remove_vmid_jobs($vmid);
PVE::VZDump::Plugin::remove_vmid_from_backup_jobs($vmid);
+   PVE::Jobs::Plugin::remove_vmid_from_jobs($vmid);


Should add a
 use PVE::Jobs::Plugin;
(or PVE::Jobs if the function is moved there) to the imports.
The reason I did not do that in the first place is that it is only used
once in the whole file, and I felt like I would make an already quite
large import section even bigger. Should the previous two lines also get a
'use'? Do we have some kind of policy for when and when not to add a 'use'?


Same for the next patch.

  
  		if ($ha_managed) {

PVE::HA::Config::delete_service_from_config("ct:$vmid");


