[pve-devel] [PATCH manager] fix #5734: provide missing methods for Proxmox.Utils for mobile ui

2024-09-23 Thread Dominik Csapak
since the mobile ui shares the Utils code with the desktop web ui (but not
the proxmox-widget-toolkit), all methods used in constructors etc. there must
also be available in the mobile ui.

We don't have any notification configuration options in the mobile ui, and
AFAIK we don't plan to add those there, so we can just implement stub
functions. This way the Utils constructor can proceed without errors, which
fixes loading the mobile ui.

Signed-off-by: Dominik Csapak 
---
 www/mobile/WidgetToolkitUtils.js | 9 +
 1 file changed, 9 insertions(+)

diff --git a/www/mobile/WidgetToolkitUtils.js b/www/mobile/WidgetToolkitUtils.js
index b292fcd5..ea710faf 100644
--- a/www/mobile/WidgetToolkitUtils.js
+++ b/www/mobile/WidgetToolkitUtils.js
@@ -586,6 +586,15 @@ utilities: {
}
 },
 
+overrideNotificationFieldName: function(extra) {
+   // do nothing, we don't have notification configuration in mobile ui
+},
+
+overrideNotificationFieldValue: function(extra) {
+   // do nothing, we don't have notification configuration in mobile ui
+},
+
+
 format_task_description: function(type, id) {
let farray = Proxmox.Utils.task_desc_table[type];
let text;
-- 
2.39.5



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [RFC PATCH pve-cluster] fix #5728: pmxcfs: allow bigger writes than 4k for fuse

2024-09-23 Thread Dominik Csapak

On 9/19/24 16:57, Thomas Lamprecht wrote:

Am 19/09/2024 um 14:45 schrieb Dominik Csapak:

On 9/19/24 14:01, Thomas Lamprecht wrote:

Am 19/09/2024 um 11:52 schrieb Dominik Csapak:

by default libfuse2 limits writes to 4k size, which means that on writes
bigger than that, we do a whole write cycle for each 4k block that comes
in. To avoid that, add the option 'big_writes' to allow writes bigger
than 4k at once.

This should improve pmxcfs performance for situations where we often
write large files (e.g. big ha status) and maybe reduce writes to disk.


Should? Something like before/after benchmark numbers or flamegraphs would be
really good to have; without those it's rather hard to discuss this, and I'd
like to avoid having to do those, or check the inner workings of the affected
fuse userspace/kernel code paths here, myself.


well I mean the code change is relatively small and the result is rather clear:


Well sure, the code change is just setting an option... But the actual change
is abstracted away and would benefit from actually looking into it.


in the current case we have the following calls from pmxcfs (shortened for
e-mail) when writing a single 128k block:
(dd if=... of=/etc/pve/test bs=128k count=1)


Better than nothing, but still no actual numbers (reduced time, reduced write
amplification in combination with sqlite, ...), no basic analysis of the
file/write size distribution on a single node and a (e.g. three-node)
cluster, ...
If that's all obvious to you then great, but as already mentioned in the past,
I want actual data in commit messages for such stuff, and I cannot really see
a downside of having such numbers.

Again, as is, I'm not really seeing what there is to discuss; you sent it as an
RFC after all.


[...]
so a factor of 32 fewer calls to cfs_fuse_write (including memdb_pwrite)


That can be huge or not so big at all, i.e. as mentioned above, it would be
good to measure the impact through some other metrics.

And FWIW, I used bpftrace to count [0] with an unpatched pmxcfs; there I get
the 32 calls to cfs_fuse_write only for a new file, while overwriting the
existing file again with the same amount of data (128k) just causes a single
call. I tried using more data (e.g. from 128k initially to 256k or 512k) and
it's always the data divided by 128k (even if the first file has a different
size).

We do not overwrite existing files often, but rather write to a new file and
then rename it; still, quite interesting and IMO really showing that just
because this is a +-1 line change, it doesn't necessarily have to be trivial
and obvious in its effects.

[0]: bpftrace -e 'u:cfs_fuse_write /str(args->path) == "/test"/ {@ = count();} END { print(@) }' -p "$(pidof pmxcfs)"



If we'd change to libfuse3, this would be a non-issue, since that option
got removed and is the default there.


I'd prefer that. At least if done with the future PVE 9.0, as I do not think
it's a good idea in the middle of a stable release cycle.


why not this change now, and the rewrite to libfuse3 later? that way we can
have some improvements now too...


Because I want some actual data and reasoning first, even if it's quite likely
that this improves things Somehow™, I'd like to actually know in what metrics
and by how much (even if just an upper bound due to the benchmark or some
measurement being rather artificial).

I mean, you name the big HA status, so why not measure that for real? Like,
probably among other things, in terms of bytes from those requests hitting the
block layer, i.e. the actual backing disk; then we'd know for real if this can
reduce the write load there, not just that it maybe should.



hi,

first I just wanted to say I'm sorry for my snarky comment about not needing to
test performance for such code. You're right, any insight we can gain there is
good and we (I!) should take the time to do that, even if the change looks
"obvious" like it does here.

so I did some benchmarks (mostly disk writes) and wrote the short script below
(maybe we can reuse that?)

8<
use strict;
use warnings;

use PVE::Tools;

my $size = shift;

sub get_bytes_written {
my $fh = IO::File->new("/proc/diskstats", "r");
die if !$fh;
my $bytes = undef;
while (defined(my $line = <$fh>)) {
if ($line =~ m/sdb/) {
my @fields = split(/\s+/, $line);
$bytes = $fields[10] * 512;
}
}
return $bytes;
}

sub test_write {
my ($k) = @_;
system("rm /etc/pve/testfile");
my $data = "a"x($k*1024);
system("sync; echo -n 3> /proc/sys/vm/drop_caches");
my $bytes_before = get_bytes_written();
PVE::Tools::file_set_contents("/etc/pve/testfile", $data);
system("sync; echo -n 3> /proc/sys/vm/drop_caches");
my $bytes_after = get_bytes_written();
return $bytes_after - $bytes_before;
}

$size //= 128;

my $written = test_write($size) / 1024;
print("$written\n");
>8
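
To collect numbers for several sizes in one go, a small driver like the
following can be used (just a hypothetical convenience, assuming the script
above is saved as `bench.pl`):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# run bench.pl for a range of file sizes (in KiB) and print the size, the
# KiB written to the backing disk and the resulting write amplification
for my $size (1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024) {
    my $written = `perl bench.pl $size`;
    chomp $written;
    printf("%4d %10.1f %8.1f\n", $size, $written, $written / $size);
}
```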

t

Re: [pve-devel] [Veeam] Veeam change requests?

2024-09-23 Thread Pavel Tide via pve-devel
--- Begin Message ---
Hi Dominik,

For now the best course of action would be to post a message on our forums 
(forums.veeam.com) in this subsection:

https://forums.veeam.com/kvm-rhv-olvm-proxmox-f62/

I am in the process of arranging some external bug-tracker (we don't have one 
right now). If you have any preferences please let me know.

As for the question about how we work with QEMU - let me find an answer and I 
will get back to you shortly.

Thanks!


From: Andreas Neufert 
Sent: Thursday, September 19, 2024 10:40
To: Dominik Csapak; Proxmox VE development discussion; Pavel Tide
Subject: Re: [pve-devel] [Veeam] Veeam change requests?

Hi Dominik, thanks for your input and feedback.

I CC @Pavel Tide, who can answer the code 
questions.
As for support: Veeam customers can open a support case at
https://support.veeam.com

I would like to ask the PVE group to do the same when you identify a bug,
selecting the evaluation product choice from the dropdown.
Then please send me or Pavel the support ticket number so that we can route
the ticket differently (out of evaluation support).
The advantage of using the Veeam support portal is that you can upload logs, 
and we can share fixes with you for testing or similar things.
To start the support ticket with the evaluation product selection, you need to 
create a free account on veeam.com

If nothing works, Pavel and I can help directly (send mail). We also monitor 
this list for Veeam issues and questions.

Best regards... Andreas

From: Dominik Csapak <d.csa...@proxmox.com>
Date: Thursday, 19. September 2024 at 09:59
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Cc: Andreas Neufert <andreas.neuf...@veeam.com>
Subject: Re: [pve-devel] [Veeam] Veeam change requests?

On 9/17/24 09:20, Andreas Neufert via pve-devel wrote:
>
> Hi Proxmox Dev team,
>

Hi,

> Tim Marx mentioned that you have some insights and change wishes for the 
> Veeam backup processing and that we should reach out to this list. We would 
> be happy to get this feedback here to be able to address it in our code or 
> join a call if this helps.

Thanks for reaching out!

During (very basic & short) testing, I discovered a few things that are
problematic from our point of view:

* During backup, there is often a longer running connection open to our QMP
socket of running VMs (/var/run/qemu-server/<vmid>.qmp, where <vmid> is the
vmid). This blocks our management stack from doing certain tasks, like
start/stop (probably does not matter during backup) but also things like the
VNC console, etc.

a better way would be to close the connections as soon as possible instead of
keeping them open. (Alternatively, using our API/CLI could also be done, but I
don't know what exact QMP commands you're running.)

if you absolutely need a longer running socket, please open a bug report on
https://bugzilla.proxmox.com/ so we can discuss and track there how we could
make a socket available that is not used by our stack.

* Another thing that I noticed was that it's not really visible if a backup is
running for a particular VM, so users might accidentally shut them down (or
pause them, etc.). I think it's especially bad if the VM is placed under an HA
policy that has 'stopped' as target, as that will try to stop the VM by
itself. (Though this might be a configuration error in itself?)

A quick way to fix this would be to have a (custom) lock in our VMs. For
longer running tasks that block a guest, we have a 'lock: <type>' line in the
config that prevents our stack from doing most operations.

Putting that in would be a very short call to our perl code that locks the
config locally (`PVE::QemuConfig->lock_config($vmid, $updatefn)`), checks for
existing locks, updates the config with a new (custom) lock and writes it
again (see the rough sketch below).

Though I must admit, I'm not sure if custom locks outside of our defined ones
would work, but I'm sure we could add a 'custom' lock that you could use,
should my mentioned approach not work properly.
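
A rough sketch of that idea ('backup-tool' is a made-up lock value here and,
as said, such a custom value may not be accepted by our config handling
as-is):

```perl
use strict;
use warnings;

use PVE::QemuConfig;

# sketch: set/clear a lock in the VM config while an external backup runs;
# 'backup-tool' is a hypothetical lock value, not one of our defined locks
sub set_external_backup_lock {
    my ($vmid) = @_;

    PVE::QemuConfig->lock_config($vmid, sub {
        my $conf = PVE::QemuConfig->load_config($vmid);
        PVE::QemuConfig->check_lock($conf); # dies if some lock is already set
        $conf->{lock} = 'backup-tool';
        PVE::QemuConfig->write_config($vmid, $conf);
    });
}

sub remove_external_backup_lock {
    my ($vmid) = @_;

    PVE::QemuConfig->lock_config($vmid, sub {
        my $conf = PVE::QemuConfig->load_config($vmid);
        delete $conf->{lock} if ($conf->{lock} // '') eq 'backup-tool';
        PVE::QemuConfig->write_config($vmid, $conf);
    });
}
```

While such a lock is present, our stack would refuse most operations on the
VM, just like it does during our own backups.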

* Also, I noticed that when a guest is started from your stack, you modify the 
QEMU command line a
bit, namely removing some options that would be necessary to start the VM 
during the backup.
Is there a specific reason why you do it this way, instead of starting the VM 
through
our API/CLI?


A more general question last: What is the 

Re: [pve-devel] [RFC PATCH pve-cluster] fix #5728: pmxcfs: allow bigger writes than 4k for fuse

2024-09-23 Thread Dominik Csapak

On 9/23/24 14:00, Friedrich Weber wrote:

On 23/09/2024 11:17, Dominik Csapak wrote:

[...]
so i did some benchmarks (mostly disk writes) and wrote the short script
below
(maybe we can reuse that?)

8<
use strict;
use warnings;

use PVE::Tools;

my $size = shift;

sub get_bytes_written {
     my $fh = IO::File->new("/proc/diskstats", "r");
     die if !$fh;
     my $bytes = undef;
     while (defined(my $line = <$fh>)) {
     if ($line =~ m/sdb/) {
     my @fields = split(/\s+/, $line);
     $bytes = $fields[10] * 512;
     }
     }
     return $bytes;
}

sub test_write {
     my ($k) = @_;
     system("rm /etc/pve/testfile");
     my $data = "a"x($k*1024);
     system("sync; echo -n 3> /proc/sys/vm/drop_caches");


I'm not sure this actually drops the caches: Without the space between
`3` and `>` I think this redirects fd 3 to that file (so doesn't
actually write the `3`)? I didn't run the script though, so not sure if
it makes any difference for the results.


ah yeah, I noticed but forgot to answer here. I fixed it locally and reran the
tests, with the same results (+- a bit of variation).
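
For reference, the corrected line in the script then reads (note the space
before `>`, so the shell writes a literal `3` to drop_caches instead of
redirecting fd 3):

```perl
system("sync; echo -n 3 > /proc/sys/vm/drop_caches");
```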


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [RFC PATCH pve-cluster] fix #5728: pmxcfs: allow bigger writes than 4k for fuse

2024-09-23 Thread Friedrich Weber
On 23/09/2024 11:17, Dominik Csapak wrote:
> [...]
> so i did some benchmarks (mostly disk writes) and wrote the short script
> below
> (maybe we can reuse that?)
> 
> 8<
> use strict;
> use warnings;
> 
> use PVE::Tools;
> 
> my $size = shift;
> 
> sub get_bytes_written {
>     my $fh = IO::File->new("/proc/diskstats", "r");
>     die if !$fh;
>     my $bytes = undef;
>     while (defined(my $line = <$fh>)) {
>     if ($line =~ m/sdb/) {
>     my @fields = split(/\s+/, $line);
>     $bytes = $fields[10] * 512;
>     }
>     }
>     return $bytes;
> }
> 
> sub test_write {
>     my ($k) = @_;
>     system("rm /etc/pve/testfile");
>     my $data = "a"x($k*1024);
>     system("sync; echo -n 3> /proc/sys/vm/drop_caches");

I'm not sure this actually drops the caches: Without the space between
`3` and `>` I think this redirects fd 3 to that file (so doesn't
actually write the `3`)? I didn't run the script though, so not sure if
it makes any difference for the results.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied-series: partially-applied: [PATCH many v9 00/13] notifications: notification metadata matching improvements

2024-09-23 Thread Thomas Lamprecht
Am 23/09/2024 um 13:27 schrieb Lukas Wagner:
> pve-manager has been bumped in the meanwhile, I guess we could now merge the
> remaining patches for pve-docs and proxmox-widget-toolkit?
> They still apply cleanly and a quick test also showed that everything still
> works as expected.

thanks for the reminder, applied the remaining docs and wtk patches now.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [RFC PATCH pve-cluster] fix #5728: pmxcfs: allow bigger writes than 4k for fuse

2024-09-23 Thread Filip Schauer

Changing the way we write files can eliminate the exponential growth of
write amplification with files of up to 128KiB in size.

Right now we are using `PVE::Tools::file_set_contents`, which itself uses
`print`. Looking at the debug output, we can see that this ends up writing
the file contents in 8k blocks.

```
$ echo 1 > /etc/pve/.debug && perl -e 'use PVE::Tools; my $data = "a"x(128*1024); PVE::Tools::file_set_contents("/etc/pve/testfile", $data);' && echo 0 > /etc/pve/.debug

$ journalctl -n250 -u pve-cluster | grep cfs_fuse_write
Sep 23 15:23:04 pmxcfsbench pmxcfs[16835]: [main] debug: enter cfs_fuse_write /testfile.tmp.22487 8192 0 (pmxcfs.c:355:cfs_fuse_write)
Sep 23 15:23:04 pmxcfsbench pmxcfs[16835]: [main] debug: leave cfs_fuse_write /testfile.tmp.22487 (8192) (pmxcfs.c:368:cfs_fuse_write)

...
```

So let's change the benchmark script to write all the contents in a
single block.

```diff
@@ -21,10 +21,9 @@
 sub test_write {
     my ($k) = @_;
     system("rm /etc/pve/testfile");
-    my $data = "a"x($k*1024);
     system("sync; echo -n 3 > /proc/sys/vm/drop_caches");
     my $bytes_before = get_bytes_written();
-    PVE::Tools::file_set_contents("/etc/pve/testfile", $data);
+    system("dd if=/dev/urandom of=/etc/pve/testfile bs=${k}k count=1 2> /dev/null");
     system("sync; echo -n 3 > /proc/sys/vm/drop_caches");
     my $bytes_after = get_bytes_written();
     return $bytes_after - $bytes_before;
```

Along with the `-obig_writes` patch applied, this gives the following
results:

data size (KiB)  written (KiB)  amplification
1                54             54.0
2                33             16.5
4                49             12.3
8                53             6.6
16               62             3.9
32               77             2.4
64               114            1.8
128              178            1.4
256              580            2.3
512              2157           4.2
1024             9844           9.6

With this we write in 128k blocks instead of 8k blocks, eliminating the
rapid write amplification growth up until 128k data size.

It seems that `PVE::Tools::file_set_contents` needs to be optimized to
not write the contents in 8k blocks. Instead of `print` we might want to
use `syswrite`.
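
A rough sketch of what that could look like (a hypothetical helper, not the
actual `PVE::Tools::file_set_contents` implementation):

```perl
use strict;
use warnings;

use Fcntl qw(O_WRONLY O_CREAT O_TRUNC);
use IO::File;

# sketch: write $data with syswrite, bypassing PerlIO's buffering so the data
# reaches the fuse layer in one large write instead of many 8k chunks
sub file_set_contents_syswrite {
    my ($path, $data) = @_;

    # keep the write-to-tmpfile-then-rename pattern for atomic replacement
    my $tmp = "$path.tmp.$$";
    my $fh = IO::File->new($tmp, O_WRONLY | O_CREAT | O_TRUNC, 0640)
        or die "unable to open '$tmp' - $!\n";

    my $len = length($data);
    my $offset = 0;
    while ($offset < $len) {
        my $written = syswrite($fh, $data, $len - $offset, $offset);
        die "write to '$tmp' failed - $!\n" if !defined($written);
        $offset += $written;
    }

    $fh->close() or die "closing '$tmp' failed - $!\n";
    rename($tmp, $path) or die "renaming '$tmp' to '$path' failed - $!\n";
}
```

With `-obig_writes` active, a 128k payload written like this should then reach
cfs_fuse_write as a single request instead of sixteen 8k ones.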


On 23/09/2024 13:48, Filip Schauer wrote:

I also ran some benchmarks with the same script.

I created a VM with two virtual disks, (both on an LVM Thin storage)
installed PVE on one disk and set up an ext4 partition on the other.

I stopped pvestatd and pve-cluster,

```
systemctl stop pvestatd
systemctl stop pve-cluster
```

moved the pmxcfs database file to its own disk

```
mv /var/lib/pve-cluster/config.db /tmp/
mount /dev/sdb1 /var/lib/pve-cluster
mv /tmp/config.db /var/lib/pve-cluster/
```

and started pve-cluster again.

```
systemctl start pve-cluster
```

These are my results before and after applying the patch:

data size (KiB)  written (old, KiB)  amplification (old)  written (new, KiB)  amplification (new)
1                48                  48                   45                  45
2                48                  24                   45                  23
4                82                  21                   80                  20
8                121                 15                   90                  11
16               217                 14                   146                 9
32               506                 16                   314                 10
64               1472                23                   826                 13
128              5585                44                   3765                29
256              20424               80                   10743               42
512              86715               169                  43650               85
1024             369568              361                  187496              183

As can be seen, my results are similar with amplification really picking
up at 128k. The patch seems to cut write amplification in half with big
files, while making virtually no difference with files up to 4k.

On 23/09/2024 11:17, Dominik Csapak wrote:

On 9/19/24 16:57, Thomas Lamprecht wrote:

Am 19/09/2024 um 14:45 schrieb Dominik Csapak:

On 9/19/24 14:01, Thomas Lamprecht wrote:

Am 19/09/2024 um 11:52 schrieb Dominik Csapak:
by default libfuse2 limits writes to 4k size, which means that on 
writes
bigger than that, we do a whole write cycle for each 4k block 
that comes
in. To avoid that, add the option 'big_writes' to allow writes 
bigger

than 4k at once.

This should improve pmxcfs performance for situations where we often
write large files (e.g. big ha status) and maybe reduce writes to 
disk.


Should? Something like before/after for benchmark numbers, 
flamegraphs
would be really good to have, without those it's rather hard to 
discuss
this, and I'd like to avoid having to do those, or check the inner 
workings

of the affected fuse userspace/kernel code paths here myself.


well I mean the code change is relatively small and the result is 
rather clear:


Well sure the code change is just setting an option... But the 
actual change is

abstracted away and would benefit from actually looking into..

in the current case we have the following calls from pmxcfs 
(shorten

Re: [pve-devel] [RFC PATCH pve-cluster] fix #5728: pmxcfs: allow bigger writes than 4k for fuse

2024-09-23 Thread Filip Schauer

I also ran some benchmarks with the same script.

I created a VM with two virtual disks, (both on an LVM Thin storage)
installed PVE on one disk and set up an ext4 partition on the other.

I stopped pvestatd and pve-cluster,

```
systemctl stop pvestatd
systemctl stop pve-cluster
```

moved the pmxcfs database file to its own disk

```
mv /var/lib/pve-cluster/config.db /tmp/
mount /dev/sdb1 /var/lib/pve-cluster
mv /tmp/config.db /var/lib/pve-cluster/
```

and started pve-cluster again.

```
systemctl start pve-cluster
```

These are my results before and after applying the patch:

data size (KiB)  written (old, KiB)  amplification (old)  written (new, KiB)  amplification (new)
1                48                  48                   45                  45
2                48                  24                   45                  23
4                82                  21                   80                  20
8                121                 15                   90                  11
16               217                 14                   146                 9
32               506                 16                   314                 10
64               1472                23                   826                 13
128              5585                44                   3765                29
256              20424               80                   10743               42
512              86715               169                  43650               85
1024             369568              361                  187496              183

As can be seen, my results are similar with amplification really picking
up at 128k. The patch seems to cut write amplification in half with big
files, while making virtually no difference with files up to 4k.

On 23/09/2024 11:17, Dominik Csapak wrote:

On 9/19/24 16:57, Thomas Lamprecht wrote:

Am 19/09/2024 um 14:45 schrieb Dominik Csapak:

On 9/19/24 14:01, Thomas Lamprecht wrote:

Am 19/09/2024 um 11:52 schrieb Dominik Csapak:
by default libfuse2 limits writes to 4k size, which means that on 
writes
bigger than that, we do a whole write cycle for each 4k block that 
comes

in. To avoid that, add the option 'big_writes' to allow writes bigger
than 4k at once.

This should improve pmxcfs performance for situations where we often
write large files (e.g. big ha status) and maybe reduce writes to 
disk.


Should? Something like before/after for benchmark numbers, flamegraphs
would be really good to have, without those it's rather hard to 
discuss
this, and I'd like to avoid having to do those, or check the inner 
workings

of the affected fuse userspace/kernel code paths here myself.


well I mean the code change is relatively small and the result is 
rather clear:


Well sure the code change is just setting an option... But the actual 
change is

abstracted away and would benefit from actually looking into..

in the current case we have the following calls from pmxcfs 
(shortened for e-mail)

when writing a single 128k block:
(dd if=... of=/etc/pve/test bs=128k count=1)


Better than nothing but still no actual numbers (reduced time, 
reduced write amp
in combination with sqlite, ...), some basic analysis over file/write 
size distribution

on a single node and (e.g. three node) cluster, ...
If that's all obvious for you then great, but as already mentioned in 
the past, I
want actual data in commit messages for such stuff, and I cannot 
really see a downside

of having such numbers.

Again, as is I'm not really seeing what's to discuss, you send it as 
RFC after

all.


[...]
so a factor of 32 less calls to cfs_fuse_write (including memdb_pwrite)


That can be huge or not so big at all, i.e. as mentioned above, it 
would we good to

measure the impact through some other metrics.

And FWIW, I used bpftrace to count [0] with an unpatched pmxcfs, 
there I get
the 32 calls to cfs_fuse_write only for a new file, overwriting the 
existing
file again with the same amount of data (128k) just causes a single 
call.
I tried using more data (e.g. from 128k initially to 256k or 512k) 
and it's
always the data divided by 128k (even if the first file has a 
different size)


We do not override existing files often, but rather write to a new 
file and
then rename, but still quite interesting and IMO really showing that 
just
because this is 1 +-1 line change it doesn't necessarily have to be 
trivial

and obvious in its effects.

[0]: bpftrace -e 'u:cfs_fuse_write /str(args->path) == "/test"/ {@ = 
count();} END { print(@) }' -p "$(pidof pmxcfs)"



If we'd change to libfuse3, this would be a non-issue, since that 
option

got removed and is the default there.


I'd prefer that. At least if done with the future PVE 9.0, as I do 
not think

it's a good idea in the middle of a stable release cycle.


why not this change now, and the rewrite to libfuse3 later? that way 
we can

have some improvements now too...


Because I want some actual data and reasoning first, even if it's 
quite likely
that this improves things Somehow™, I'd like to actually know in what 
metrics
and by how much (even if just an upper bound due to the benchmark or 
some

measurement being

Re: [pve-devel] partially-applied: [PATCH many v9 00/13] notifications: notification metadata matching improvements

2024-09-23 Thread Lukas Wagner
On  2024-07-22 19:36, Thomas Lamprecht wrote:
>> Lukas Wagner (5):
>>   api: jobs: vzdump: pass job 'job-id' parameter
>>   ui: dc: backup: allow to set custom job id in  advanced settings
>>   api: notification: add API for getting known metadata fields/values
>>   ui: utils: add overrides for translatable notification fields/values
>>   d/control: bump proxmox-widget-toolkit dependency to 4.1.4
>>
>>  PVE/API2/Backup.pm  |   2 +-
>>  PVE/API2/Cluster/Notifications.pm   | 139 
>>  PVE/API2/VZDump.pm  |  13 +-
>>  PVE/Jobs/VZDump.pm  |   4 +-
>>  PVE/VZDump.pm   |   6 +-
>>  debian/control  |   2 +-
>>  www/manager6/Utils.js   |  11 ++
>>  www/manager6/dc/Backup.js   |   4 -
>>  www/manager6/panel/BackupAdvancedOptions.js |  23 
>>  9 files changed, 192 insertions(+), 12 deletions(-)
>>
> 
> applied above for now, we probably should bump manager soonish and then can
> apply below.
> 

pve-manager has been bumped in the meanwhile, I guess we could now merge the
remaining patches for pve-docs and proxmox-widget-toolkit?
They still apply cleanly and a quick test also showed that everything still
works as expected.


-- 
- Lukas


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel