[pve-devel] [PATCH guest-common 1/4] replication: refactor finding most recent common replication snapshot

2021-10-19 Thread Fabian Ebner
By using a single loop instead. Should make the code more readable, but also more efficient. Suggested-by: Fabian Grünbichler Signed-off-by: Fabian Ebner --- src/PVE/Replication.pm | 24 +--- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/src/PVE/Replication
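A minimal sketch of the single-loop idea (illustrative only, not the actual patch; it assumes snapshot hashes keyed by name, each entry carrying a 'timestamp' field such as the one provided by the new volume_snapshot_info):

    # Walk the source snapshots once, keeping the newest one that also
    # exists on the target side.
    sub find_most_recent_common_snapshot {
        my ($source_snaps, $target_snaps) = @_;

        my $best;
        for my $name (keys %$source_snaps) {
            next if !defined($target_snaps->{$name});
            my $ts = $source_snaps->{$name}->{timestamp} // 0;
            if (!defined($best) || $ts > $best->{timestamp}) {
                $best = { name => $name, timestamp => $ts };
            }
        }
        return $best; # undef if there is no common snapshot
    }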

[pve-devel] [PATCH storage 2/3] plugin: remove volume_snapshot_list

2021-10-19 Thread Fabian Ebner
which was only used by replication, but now replication uses volume_snapshot_info instead. Signed-off-by: Fabian Ebner --- Breaks old pve-guest-common. Requires APIVER+APIAGE bump (it's in the next patch). PVE/Storage.pm | 16 PVE/Storage/Plugin.pm| 9 -

[pve-devel] [PATCH storage 1/3] plugin: add volume_snapshot_info function

2021-10-19 Thread Fabian Ebner
which allows for better choices of common replication snapshots. Signed-off-by: Fabian Ebner --- If this is applied without the following patches, it still needs an APIVER+APIAGE bump and API changelog entry. PVE/Storage.pm | 9 + PVE/Storage/Plugin.pm| 10 ++
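For orientation, a guess at the kind of data such a function returns: a hash keyed by snapshot name with per-snapshot details. The field names below ('id', 'timestamp') are assumptions for illustration; the authoritative definition is in the patch itself.

    # Hypothetical result of volume_snapshot_info($cfg, $storeid, $volname):
    my $info = {
        '__replicate_job_123__' => { id => '12345', timestamp => 1634630400 },
        'manual-snap'           => { id => '67890', timestamp => 1634540000 },
    };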

[pve-devel] [PATCH manager 3/3] test: replication: remove mocking for obsolete volume_snapshot_list

2021-10-19 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- Build-depends on pve-guest-common already using volume_snapshot_info. test/ReplicationTestEnv.pm | 14 -- 1 file changed, 14 deletions(-) diff --git a/test/ReplicationTestEnv.pm b/test/ReplicationTestEnv.pm index 35653e75..883bebca 100755 --- a/test/

[pve-devel] [PATCH guest-common 2/4] replication: prepare: return additional information about snapshots

2021-10-19 Thread Fabian Ebner
This is backwards compatible, because existing users of prepare() only rely on the elements to evaluate to true or be defined. Signed-off-by: Fabian Ebner --- Depends on pve-storage for the new volume_snapshot_info function. Build-breaks old pve-manager, because the test there assumes volume_sn
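In other words, callers that only test the elements keep working when a plain true value is replaced by a hash of details. A tiny illustration (not code from the patch; the new element shape is assumed):

    # A hash ref with details is just as 'true' as the old plain value,
    # so existing truthiness/definedness checks behave the same.
    my $old_style = { 'snap1' => 1 };
    my $new_style = { 'snap1' => { timestamp => 1634630400 } };  # assumed shape

    for my $snapshots ($old_style, $new_style) {
        print "snap1 present\n" if $snapshots->{'snap1'};
    }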

[pve-devel] [PATCH manager 2/3] test: replication: mock volume_snapshot_info

2021-10-19 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- test/ReplicationTestEnv.pm | 24 ++-- 1 file changed, 22 insertions(+), 2 deletions(-) diff --git a/test/ReplicationTestEnv.pm b/test/ReplicationTestEnv.pm index dea1921b..35653e75 100755 --- a/test/ReplicationTestEnv.pm +++ b/test/ReplicationT

[pve-devel] [PATCH storage 3/3] bump APIVER and APIAGE

2021-10-19 Thread Fabian Ebner
Added blockers parameter to volume_rollback_is_possible. Replaced volume_snapshot_list with volume_snapshot_info. Signed-off-by: Fabian Ebner --- ApiChangeLog | 16 PVE/Storage.pm | 4 ++-- 2 files changed, 18 insertions(+), 2 deletions(-) diff --git a/ApiChangeLog b/ApiChan

[pve-devel] [PATCH-SERIES storage/guest-common/manager] Follow-up for fixing replication/rollback interaction

2021-10-19 Thread Fabian Ebner
Returning more information about snapshots allows for better decisions when picking the incremental base snapshot. Namely, to distinguish between different snapshots with the same name, and to choose a more recent snapshot in some cases, reducing the send delta. On top of that, the code in find_com

[pve-devel] [PATCH manager 1/3] test: replication: avoid implicit return for volume_snapshot

2021-10-19 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- test/ReplicationTestEnv.pm | 2 ++ 1 file changed, 2 insertions(+) diff --git a/test/ReplicationTestEnv.pm b/test/ReplicationTestEnv.pm index 005e6d54..dea1921b 100755 --- a/test/ReplicationTestEnv.pm +++ b/test/ReplicationTestEnv.pm @@ -187,6 +187,8 @@ my $mocked
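For context, a sketch of why the explicit return matters in a Perl mock (the variable and bookkeeping below are made up, not the actual ReplicationTestEnv code): without a bare return, a sub implicitly returns the value of its last expression, which a caller might accidentally come to rely on.

    my $snapshots = {};  # hypothetical bookkeeping for the mock

    sub mocked_volume_snapshot {
        my ($storecfg, $volid, $snapname) = @_;
        $snapshots->{$volid}->{$snapname} = 1;
        return;   # explicit: don't leak the assignment's value to the caller
    }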

[pve-devel] [RFC guest-common 4/4] config: snapshot delete check: use volume_snapshot_info

2021-10-19 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- To be applied if the RFC from the original series is: https://lists.proxmox.com/pipermail/pve-devel/2021-August/049705.html src/PVE/AbstractConfig.pm | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/PVE/AbstractConfig.pm b/src/PVE/Abstr

[pve-devel] [PATCH guest-common 3/4] replication: find common snapshot: use additional information

2021-10-19 Thread Fabian Ebner
which is now available from the storage back-end. The benefits are: 1. Ability to detect different snapshots even if they have the same name. Rather hard to reach, but for example with snapshots A -> B -> C -> __replicationXYZ: remove B, rollback to C (causes __replicationXYZ to be removed), crea
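A rough sketch of that first benefit (the field name 'timestamp' is an assumption; the real comparison lives in the patch): with only names available, a snapshot that was deleted and later recreated under the same name is indistinguishable from the original, while the extra metadata exposes the difference.

    # Same name on source and target does not guarantee the same snapshot;
    # compare the additional metadata as well.
    sub is_same_snapshot {
        my ($local_info, $remote_info) = @_;
        return $local_info->{timestamp} == $remote_info->{timestamp};
    }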

Re: [pve-devel] [PATCH manager 3/3] ui: add 'win11' ostype and set defaults in wizard

2021-10-19 Thread DERUMIER, Alexandre
On Monday, 18 October 2021 at 09:10 +0200, Thomas Lamprecht wrote: Hi, On 18.10.21 07:57, DERUMIER, Alexandre wrote: I don't think that win2022 requires tpm, only win11. seems so, but do you think it will hurt if we default to one (in GUI) anyway? I mean it's only a check box to tick off if one d

[pve-devel] [PATCH manager 01/11] api: ceph-mds: get mds state when multiple ceph filesystems exist

2021-10-19 Thread Dominik Csapak
by iterating over all of them and saving the name for the active ones. This fixes the issue that an mds assigned to a filesystem other than the first one in the list gets wrongly shown as offline. Signed-off-by: Dominik Csapak --- PVE/Ceph/Services.pm | 16 +--- 1 file changed, 9 insertions(+), 7 de
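Roughly, instead of inspecting only the first filesystem's mdsmap, all of them are walked. A sketch under assumed field names (the decoded 'ceph fs dump'-style structure below is an assumption, see the patch for the real data):

    # Collect, per active MDS, the name of the filesystem it serves.
    # $fs_dump is assumed to be the decoded filesystem map from Ceph.
    sub active_mds_to_fs {
        my ($fs_dump) = @_;

        my $active = {};
        for my $fs (@{ $fs_dump->{filesystems} }) {
            for my $mds (values %{ $fs->{mdsmap}->{info} }) {
                $active->{ $mds->{name} } = $fs->{mdsmap}->{fs_name}
                    if ($mds->{state} // '') eq 'up:active';
            }
        }
        return $active;
    }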

[pve-devel] [PATCH manager 03/11] api: cephfs: refactor {ls, create}_fs

2021-10-19 Thread Dominik Csapak
no function change intended Signed-off-by: Dominik Csapak --- PVE/API2/Ceph/FS.pm | 22 ++ PVE/Ceph/Tools.pm | 36 2 files changed, 42 insertions(+), 16 deletions(-) diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm index 82b5d616.

[pve-devel] [PATCH storage/manager] fix #3616: support multiple ceph filesystems

2021-10-19 Thread Dominik Csapak
this series adds support for multiple cephfs. no single patch fixes the bug, so it's in no commit subject... (feel free to change the commit subject when applying if you find one patch most appropriate?) a user can already create multiple cephfs via 'pveceph' (or manually with the ceph tools), but the

[pve-devel] [PATCH storage 1/1] cephfs: add support for multiple ceph filesystems

2021-10-19 Thread Dominik Csapak
by optionally saving the name of the cephfs Signed-off-by: Dominik Csapak --- PVE/Storage/CephFSPlugin.pm | 8 1 file changed, 8 insertions(+) diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm index 3b9a791..f587db7 100644 --- a/PVE/Storage/CephFSPlugin.pm +++ b/PV
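Presumably this boils down to a new optional plugin property, along these lines (schema details are assumed; only the property name 'fs-name' is taken from the series):

    sub properties {
        return {
            # ... other, pre-existing plugin properties ...
            'fs-name' => {
                description => "The Ceph filesystem name.",
                type => 'string',
                optional => 1,
            },
        };
    }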

[pve-devel] [PATCH manager 09/11] ui: ceph/fs: allow creating multiple cephfs

2021-10-19 Thread Dominik Csapak
but only if there are any standby mds Signed-off-by: Dominik Csapak --- www/manager6/ceph/FS.js | 21 - 1 file changed, 8 insertions(+), 13 deletions(-) diff --git a/www/manager6/ceph/FS.js b/www/manager6/ceph/FS.js index c620ec6e..a3fa3672 100644 --- a/www/manager6/ceph/FS.

[pve-devel] [PATCH manager 08/11] ui: storage/cephfs: make ceph fs selectable

2021-10-19 Thread Dominik Csapak
by adding a CephFSSelector and using it in the CephFSEdit window (similar to the poolselector/textfield) Signed-off-by: Dominik Csapak --- www/manager6/Makefile | 1 + www/manager6/form/CephFSSelector.js | 42 + www/manager6/storage/CephFSEdit.js | 25

[pve-devel] [PATCH manager 05/11] ui: ceph/ServiceList: refactor controller out

2021-10-19 Thread Dominik Csapak
we want to reuse that controller type by overriding some functionality in the future, so factor it out. Signed-off-by: Dominik Csapak --- www/manager6/ceph/ServiceList.js | 302 --- 1 file changed, 153 insertions(+), 149 deletions(-) diff --git a/www/manager6/ceph/Se

[pve-devel] [PATCH manager 02/11] ui: ceph: catch missing version for service list

2021-10-19 Thread Dominik Csapak
when a daemon is stopped, the version here is 'undefined'. catch that instead of letting the template renderer run into an error. this fixes the rendering of the grid backgrounds Signed-off-by: Dominik Csapak --- www/manager6/ceph/ServiceList.js | 3 +++ 1 file changed, 3 insertions(+) diff --g

[pve-devel] [PATCH manager 04/11] api: cephfs: more checks on fs create

2021-10-19 Thread Dominik Csapak
namely if the fs already exists, and if there is currently a standby mds that can be used for the new fs. Previously, only one cephfs was possible, so these checks were not necessary. Now with Pacific, it is possible to have multiple cephfs and we should check for those. Signed-off-by: Domini
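An illustrative version of the two checks (helper shape and data structures are assumptions, not necessarily the code in the patch):

    sub assert_fs_can_be_created {
        my ($fs_name, $existing_fs, $mds_info) = @_;

        # refuse to create a filesystem whose name is already taken
        die "ceph fs '$fs_name' already exists\n"
            if grep { $_->{name} eq $fs_name } @$existing_fs;

        # require at least one standby MDS that could serve the new fs
        die "no standby MDS available for new filesystem '$fs_name'\n"
            if !grep { $_->{state} eq 'up:standby' } values %$mds_info;
    }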

[pve-devel] [PATCH manager 07/11] api: cephfs: add 'fs-name' for cephfs storage

2021-10-19 Thread Dominik Csapak
so that we can uniquely identify the cephfs (in case of multiple) Signed-off-by: Dominik Csapak --- PVE/API2/Ceph/FS.pm | 1 + 1 file changed, 1 insertion(+) diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm index 845c4fbd..8bf71524 100644 --- a/PVE/API2/Ceph/FS.pm +++ b/PVE/API2/Ceph/FS.p

[pve-devel] [PATCH manager 06/11] ui: ceph/fs: show fs for active mds

2021-10-19 Thread Dominik Csapak
so that the user can see which mds is responsible for which cephfs Signed-off-by: Dominik Csapak --- www/manager6/ceph/FS.js | 2 +- www/manager6/ceph/ServiceList.js | 14 ++ 2 files changed, 15 insertions(+), 1 deletion(-) diff --git a/www/manager6/ceph/FS.js b/www/manage

[pve-devel] [PATCH manager 10/11] api: cephfs: add destroy cephfs api call

2021-10-19 Thread Dominik Csapak
with 'remove-storages' and 'remove-pools' as optional parameters Signed-off-by: Dominik Csapak --- PVE/API2/Ceph/FS.pm | 119 ++ PVE/Ceph/Tools.pm | 15 ++ www/manager6/Utils.js | 1 + 3 files changed, 135 insertions(+) diff --git a/PVE/API2/

[pve-devel] [PATCH manager 11/11] ui: ceph/fs: allow destroying cephfs

2021-10-19 Thread Dominik Csapak
Signed-off-by: Dominik Csapak --- www/manager6/Makefile| 1 + www/manager6/ceph/FS.js | 35 www/manager6/window/SafeDestroyCephFS.js | 22 +++ 3 files changed, 58 insertions(+) create mode 100644 www/manager6/window/SafeD

[pve-devel] [PATCH access-control] fix user deletion when realm does not enforce TFA

2021-10-19 Thread Dominik Csapak
here the existence of the user is only relevant if we want to set data, not if we delete it. Signed-off-by: Dominik Csapak --- src/PVE/AccessControl.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm index fcb16bd..347c2a
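The gist as a sketch (names and structure are assumptions, the one-line change is in the diff): only require the user to exist when TFA data is being written, not when it is being removed.

    sub user_set_tfa_sketch {
        my ($usercfg, $userid, $data) = @_;

        # only insist on an existing user when we actually store data
        die "user '$userid' does not exist\n"
            if defined($data) && !$usercfg->{users}->{$userid};

        # ... set or delete the TFA entry for $userid ...
    }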

Re: [pve-devel] [PATCH manager] pvestatd: fix rebalancing cpusets for cgroupv2

2021-10-19 Thread Aaron Lauterer
Tested this by running a few containers with some cores, so that the majority will be used by the containers, and then setting the `lxc.cgroup2.cpuset.cpus: 9-12` option for one container and restarting said container. Without the patch, it would not get assigned all cores set with this manual

Re: [pve-devel] partially-applied: [PATCH manager v2 00/12] multi tab disk panel & multi disk wizard

2021-10-19 Thread Aaron Lauterer
On 10/4/21 09:36, Thomas Lamprecht wrote: On 22.09.21 11:27, Dominik Csapak wrote: this series is intended to replace Dominic's and my previous attempts at this [0][1][2]. It splits the bandwidth options into their own tab on the disk panel and introduces a 'MultiHDEdit' panel which creates/deletes

Re: [pve-devel] [PATCH v4 firewall 1/2] implement fail2ban backend and API

2021-10-19 Thread Dominik Csapak
while the code looks ok IMHO, i have some general questions:
* does it really make sense to hard depend on fail2ban? could it not also make sense to have it as 'recommends' or 'suggests'? setting enabled to 1 could then check if it's installed and raise an error
* if we do not plan to add mo

Re: [pve-devel] [PATCH v4 manager 2/2] fix #1065: ui: fail2ban gui for nodes

2021-10-19 Thread Dominik Csapak
looks mostly ok (besides my comment about the propertystring and options thing of the previous patch) comment inline: On 10/11/21 12:57, Oguz Bektas wrote: adds a simple grid for fail2ban options into the node config panel --- v4: * no changes www/manager6/Makefile| 1 +

Re: [pve-devel] [PATCH manager v3 0/7] multi disk/mp in wizard

2021-10-19 Thread Lorenz Stechauner
Hi, everything looks and works as expected. No disk/mount point window looks broken. Tested VM/CT creation wizard and adding of disks/mount points afterwards. Tested-By: Lorenz Stechauner On 05.10.21 13:28, Dominik Csapak wrote: this series is a continuation of my previous multi tab / disk

Re: [pve-devel] any plan for zfs over open-iscsi (linux)?

2021-10-19 Thread Travis Osterman
On Mon, Oct 18, 2021 at 11:56 PM Thomas Lamprecht wrote:
> On 18.10.21 20:09, Travis Osterman wrote:
> > On Mon, Oct 18, 2021 at 12:05 AM Thomas Lamprecht <t.lampre...@proxmox.com> wrote:
> >> On 18.10.21 04:04, Travis Osterman wrote:
> >>> I think the title says it all. I use open-iscsi for my