By using a single loop instead. This should make the code not only more
readable, but also more efficient.
Suggested-by: Fabian Grünbichler
Signed-off-by: Fabian Ebner
---
src/PVE/Replication.pm | 24 +---
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/src/PVE/Replication.pm b/src/PVE/Replication.pm
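
Not the actual PVE::Replication code, just a minimal self-contained Perl
sketch of the single-loop idea described above: classify each snapshot in one
pass over the hash instead of scanning it twice. The hash layout and snapshot
names are made up for illustration.

    use strict;
    use warnings;

    # Hypothetical snapshot hash as replication might see it (names made up).
    my $snapshots = {
        'manual-backup'           => { id => 'a1', timestamp => 1000 },
        '__replicate_100-0_1000_' => { id => 'b2', timestamp => 2000 },
        '__replicate_100-1_3000_' => { id => 'c3', timestamp => 3000 },
    };

    my ($last_sync_snapshot, @other_replication_snapshots);

    # One pass: pick out the snapshot from the last sync and collect the rest
    # of the replication snapshots, instead of looping over the hash twice.
    for my $name (sort keys %$snapshots) {
        if ($name eq '__replicate_100-0_1000_') {
            $last_sync_snapshot = $snapshots->{$name};
        } elsif ($name =~ m/^__replicate_/) {
            push @other_replication_snapshots, $name;
        }
    }

    print "last sync snapshot id: $last_sync_snapshot->{id}\n" if $last_sync_snapshot;
    print "other replication snapshots: @other_replication_snapshots\n";
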
which was only used by replication, but now replication uses
volume_snapshot_info instead.
Signed-off-by: Fabian Ebner
---
Breaks old pve-guest-common.
Requires APIVER+APIAGE bump (it's in the next patch).
PVE/Storage.pm | 16
PVE/Storage/Plugin.pm | 9 -
which allows for better choices of common replication snapshots.
Signed-off-by: Fabian Ebner
---
If this is applied without the following patches, it still needs an
APIVER+APIAGE bump and API changelog entry.
PVE/Storage.pm | 9 +
PVE/Storage/Plugin.pm | 10 ++
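
The exact return format of volume_snapshot_info is not visible in this
excerpt; purely as an illustration, a per-snapshot info hash could look like
the Perl sketch below, which already carries more information than a bare
name list. The field names are assumptions, not taken from the storage API.

    use strict;
    use warnings;

    # Assumed shape: snapshot name => { id, timestamp }.
    my $info = {
        'snap1'                   => { id => '1234567890', timestamp => 1633000000 },
        '__replicate_100-0_1633_' => { id => '0987654321', timestamp => 1633100000 },
    };

    # A plain list of names (the old volume_snapshot_list style) can still be
    # derived from it, but not the other way around.
    my @names = sort keys %$info;
    print "$_ created at $info->{$_}->{timestamp}\n" for @names;
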
Signed-off-by: Fabian Ebner
---
Build-depends on pve-guest-common already using volume_snapshot_info.
test/ReplicationTestEnv.pm | 14 --
1 file changed, 14 deletions(-)
diff --git a/test/ReplicationTestEnv.pm b/test/ReplicationTestEnv.pm
index 35653e75..883bebca 100755
--- a/test/ReplicationTestEnv.pm
This is backwards compatible, because existing users of prepare() only
rely on the elements to evaluate to true or be defined.
Signed-off-by: Fabian Ebner
---
Depends on pve-storage for the new volume_snapshot_info function.
Build-breaks old pve-manager, because the test there assumes
volume_sn
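
A tiny sketch of why this stays backwards compatible, under the assumption
that prepare() used to map snapshot names to 1 and now maps them to info hash
references: both values are defined and truthy, so callers that only test for
that keep working.

    use strict;
    use warnings;

    # Old style (assumed): name => 1. New style (assumed): name => hashref.
    my $old = { '__replicate_100-0_1000_' => 1 };
    my $new = { '__replicate_100-0_1000_' => { timestamp => 1000 } };

    for my $result ($old, $new) {
        my $snap = $result->{'__replicate_100-0_1000_'};
        # Existing users only do checks like these, which behave the same way
        # for the integer 1 and for a hash reference.
        print "defined\n" if defined($snap);
        print "true\n" if $snap;
    }
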
Signed-off-by: Fabian Ebner
---
test/ReplicationTestEnv.pm | 24 ++--
1 file changed, 22 insertions(+), 2 deletions(-)
diff --git a/test/ReplicationTestEnv.pm b/test/ReplicationTestEnv.pm
index dea1921b..35653e75 100755
--- a/test/ReplicationTestEnv.pm
+++ b/test/ReplicationTestEnv.pm
Added blockers parameter to volume_rollback_is_possible.
Replaced volume_snapshot_list with volume_snapshot_info.
Signed-off-by: Fabian Ebner
---
ApiChangeLog | 16
PVE/Storage.pm | 4 ++--
2 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/ApiChangeLog b/ApiChangeLog
Returning more information about snapshots allows for better decisions
when picking the incremental base snapshot. Namely, to distinguish
between different snapshots with the same name, and to choose a more
recent snapshot in some cases, reducing the send delta. On top of
that, the code in find_com
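
A minimal sketch (not the real replication code) of how timestamps can reduce
the send delta: among the snapshots present on both sides, prefer the most
recent one as the incremental base. The data layout below is assumed.

    use strict;
    use warnings;

    # Assumed shape on source and target: snapshot name => { timestamp }.
    my $local  = { snapA => { timestamp => 100 }, snapB => { timestamp => 200 } };
    my $remote = { snapA => { timestamp => 100 }, snapB => { timestamp => 200 } };

    # Pick the common snapshot with the highest timestamp as incremental base.
    my $base;
    for my $name (keys %$local) {
        next if !$remote->{$name};
        $base = $name if !defined($base)
            || $local->{$name}->{timestamp} > $local->{$base}->{timestamp};
    }
    print "incremental base: $base\n" if defined($base);  # snapB, the newer one
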
Signed-off-by: Fabian Ebner
---
test/ReplicationTestEnv.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/test/ReplicationTestEnv.pm b/test/ReplicationTestEnv.pm
index 005e6d54..dea1921b 100755
--- a/test/ReplicationTestEnv.pm
+++ b/test/ReplicationTestEnv.pm
@@ -187,6 +187,8 @@ my $mocked
Signed-off-by: Fabian Ebner
---
To be applied if the RFC from the original series is applied:
https://lists.proxmox.com/pipermail/pve-devel/2021-August/049705.html
src/PVE/AbstractConfig.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/PVE/AbstractConfig.pm b/src/PVE/AbstractConfig.pm
which is now available from the storage back-end.
The benefits are:
1. Ability to detect different snapshots even if they have the same
name. Rather hard to reach, but for example with:
Snapshots A -> B -> C -> __replicationXYZ
Remove B, rollback to C (causes __replicationXYZ to be removed),
crea
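
An illustration of the first benefit, assuming the back-end reports some
per-snapshot id (e.g. a guid): two snapshots that merely share a name can
then be told apart before one is used as an incremental base.

    use strict;
    use warnings;

    # Assumed shape: local and remote report an id alongside the timestamp.
    my $local_snap  = { id => 'guid-new', timestamp => 300 };
    my $remote_snap = { id => 'guid-old', timestamp => 100 };

    if ($local_snap->{id} eq $remote_snap->{id}) {
        print "same snapshot, usable as incremental base\n";
    } else {
        print "same name, but different snapshots - do not use as base\n";
    }
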
On Monday, 18 October 2021 at 09:10 +0200, Thomas Lamprecht wrote:
Hi,
On 18.10.21 07:57, DERUMIER, Alexandre wrote:
I don't think that win2022 requires tpm, only win11.
seems so, but do you think it will hurt if we default to one (in GUI) anyway?
I mean it's only a check box to tick off if one d
by iterating over all of them and saving the name to the active ones
this fixes the issue that an mds assigned to a fs other than the first
one in the list gets wrongly shown as offline
Signed-off-by: Dominik Csapak
---
PVE/Ceph/Services.pm | 16 +---
1 file changed, 9 insertions(+), 7 deletions(-)
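
A rough Perl sketch of the shape of the fix, with a made-up status layout:
collect the active mds name of every filesystem first, then mark the daemons,
so an mds serving a later fs is no longer reported as offline.

    use strict;
    use warnings;

    # Made-up input: every filesystem reports which mds is currently active for it.
    my @filesystems = (
        { name => 'cephfs',  active_mds => 'node1' },
        { name => 'cephfs2', active_mds => 'node2' },
    );
    my @known_mds = qw(node1 node2 node3);

    # Iterate over all filesystems and remember every active mds name ...
    my %active;
    $active{ $_->{active_mds} } = $_->{name} for @filesystems;

    # ... so the state of each daemon is decided against all filesystems.
    for my $mds (@known_mds) {
        my $state = $active{$mds} ? "active ($active{$mds})" : 'standby/offline';
        print "$mds: $state\n";
    }
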
no function change intended
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph/FS.pm | 22 ++
PVE/Ceph/Tools.pm | 36
2 files changed, 42 insertions(+), 16 deletions(-)
diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm
index 82b5d616.
this series adds support for multiple cephfs. no single patch fixes the bug,
so it's in no commit subject... (feel free to change the commit subject
when applying if you find one patch most appropriate?)
a user can already create multiple cephfs via 'pveceph' (or manually
with the ceph tools), but the
by optionally saving the name of the cephfs
Signed-off-by: Dominik Csapak
---
PVE/Storage/CephFSPlugin.pm | 8
1 file changed, 8 insertions(+)
diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
index 3b9a791..f587db7 100644
--- a/PVE/Storage/CephFSPlugin.pm
+++ b/PVE/Storage/CephFSPlugin.pm
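
Illustrative only: storage plugins declare their configuration keys via
properties() and options(); an optional cephfs-name key would look roughly
like the sketch below. The key name 'fs-name' and its schema are assumptions,
not copied from the actual patch.

    use strict;
    use warnings;

    package My::Fake::CephFSPlugin;

    sub properties {
        return {
            # Hypothetical key for the cephfs name (name and schema assumed).
            'fs-name' => {
                description => "The Ceph filesystem name.",
                type => 'string',
            },
        };
    }

    sub options {
        return {
            # Optional: when absent, the default filesystem is used.
            'fs-name' => { optional => 1 },
        };
    }

    package main;

    print "declares: ", join(', ', sort keys %{ My::Fake::CephFSPlugin::properties() }), "\n";
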
but only if there are any standby mds
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/FS.js | 21 -
1 file changed, 8 insertions(+), 13 deletions(-)
diff --git a/www/manager6/ceph/FS.js b/www/manager6/ceph/FS.js
index c620ec6e..a3fa3672 100644
--- a/www/manager6/ceph/FS.js
by adding a CephFSSelector and using it in the CephFSEdit window
(similar to the poolselector/textfield)
Signed-off-by: Dominik Csapak
---
www/manager6/Makefile | 1 +
www/manager6/form/CephFSSelector.js | 42 +
www/manager6/storage/CephFSEdit.js | 25
we want to reuse that controller type by overriding some functionality
in the future, so factor it out.
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/ServiceList.js | 302 ---
1 file changed, 153 insertions(+), 149 deletions(-)
diff --git a/www/manager6/ceph/ServiceList.js b/www/manager6/ceph/ServiceList.js
when a daemon is stopped, the version here is 'undefined'. catch that
instead of letting the template renderer run into an error.
this fixes the rendering of the grid backgrounds
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/ServiceList.js | 3 +++
1 file changed, 3 insertions(+)
diff --git a/www/manager6/ceph/ServiceList.js b/www/manager6/ceph/ServiceList.js
namely whether the fs already exists, and whether there is currently a
standby mds that can be used for the new fs
previously, only one cephfs was possible, so these checks were not
necessary. now with pacific, it is possible to have multiple cephfs'
and we should check for those.
Signed-off-by: Dominik Csapak
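
A small sketch of the two checks described above, with made-up cluster data
(the real code queries Ceph): refuse to create a filesystem whose name
already exists, and refuse if no standby mds is available to serve it.

    use strict;
    use warnings;

    # Made-up data for illustration only.
    my @existing_fs = qw(cephfs);
    my %mds_states  = (node1 => 'active', node2 => 'up:standby');

    my $new_fs_name = 'cephfs2';

    # Check 1: refuse to create a filesystem that already exists.
    die "ceph fs '$new_fs_name' already exists\n"
        if grep { $_ eq $new_fs_name } @existing_fs;

    # Check 2: without a standby mds, the new fs could not come up.
    my $has_standby = grep { $_ eq 'up:standby' } values %mds_states;
    die "no standby mds available to serve '$new_fs_name'\n" if !$has_standby;

    print "checks passed, creating '$new_fs_name'\n";
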
so that we can uniquely identify the cephfs (in case of multiple)
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph/FS.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm
index 845c4fbd..8bf71524 100644
--- a/PVE/API2/Ceph/FS.pm
+++ b/PVE/API2/Ceph/FS.pm
so that the user can see which mds is responsible for which cephfs
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/FS.js | 2 +-
www/manager6/ceph/ServiceList.js | 14 ++
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/www/manager6/ceph/FS.js b/www/manager6/ceph/FS.js
with 'remove-storages' and 'remove-pools' as optional parameters
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph/FS.pm | 119 ++
PVE/Ceph/Tools.pm | 15 ++
www/manager6/Utils.js | 1 +
3 files changed, 135 insertions(+)
diff --git a/PVE/API2/Ceph/FS.pm b/PVE/API2/Ceph/FS.pm
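
A rough sketch of how such optional boolean parameters and the opt-in cleanup
could look; the schema and handler below are assumptions for illustration,
not the actual API code.

    use strict;
    use warnings;

    # Assumed parameter schema for a destroy call (illustrative only).
    my $parameters = {
        'remove-storages' => {
            description => "Also remove matching storage configuration entries.",
            type => 'boolean', optional => 1, default => 0,
        },
        'remove-pools' => {
            description => "Also remove the backing data and metadata pools.",
            type => 'boolean', optional => 1, default => 0,
        },
    };
    print "optional flags: ", join(', ', sort keys %$parameters), "\n";

    # Hypothetical handler logic: both clean-up steps are strictly opt-in.
    sub destroy_fs {
        my ($param) = @_;
        print "removing storage entries\n" if $param->{'remove-storages'};
        print "removing pools\n"           if $param->{'remove-pools'};
        print "destroying cephfs\n";
    }

    destroy_fs({ 'remove-storages' => 1 });
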
Signed-off-by: Dominik Csapak
---
www/manager6/Makefile | 1 +
www/manager6/ceph/FS.js | 35
www/manager6/window/SafeDestroyCephFS.js | 22 +++
3 files changed, 58 insertions(+)
create mode 100644 www/manager6/window/SafeDestroyCephFS.js
here the existence of the user is only interesting if we want to set
data, not if we delete it.
Signed-off-by: Dominik Csapak
---
src/PVE/AccessControl.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
index fcb16bd..347c2a
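
A sketch of the idea only (not the actual AccessControl code): require the
user to exist when setting data, but let the delete path proceed even if the
user is already gone. The data layout and sub name are made up.

    use strict;
    use warnings;

    my $usercfg = { users => { 'alice@pve' => {} } };

    sub update_user_data {
        my ($userid, $delete) = @_;
        my $exists = defined $usercfg->{users}->{$userid};
        # Only the "set" path needs the user to exist.
        die "user '$userid' does not exist\n" if !$exists && !$delete;
        print $delete ? "deleted data of '$userid'\n" : "set data of '$userid'\n";
    }

    update_user_data('bob@pve', 1);   # deleting: fine even though bob is gone
    update_user_data('alice@pve', 0); # setting: requires the user to exist
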
tested this by running a few containers with some cores, so that the majority
will be used by the containers and then setting the `lxc.cgroup2.cpuset.cpus:
9-12` option for one container and restarting said container.
Without the patch, it would not get assigned all cores set with this manual
On 10/4/21 09:36, Thomas Lamprecht wrote:
On 22.09.21 11:27, Dominik Csapak wrote:
this series is intended to replace dominic's and my previous attempts
at this [0][1][2]
splits the bandwidth options into their own tab on the disk panel and
introduces a 'MultiHDEdit' panel which creates/deletes
while the code looks ok IMHO, I have some general questions:
* does it really make sense to hard depend on fail2ban?
  could it not also make sense to have it as 'recommends' or 'suggests'?
  setting enabled to 1 could then check if it's installed and
  raise an error
* if we do not plan to add mo
looks mostly ok (besides my comment about the propertystring and options
thing of the previous patch)
comment inline:
On 10/11/21 12:57, Oguz Bektas wrote:
adds a simple grid for fail2ban options into the node config panel
---
v4:
* no changes
www/manager6/Makefile | 1 +
Hi,
everything looks and works as expected. No disk/mount point window looks
broken.
Tested VM/CT creation wizard and adding of disks/mount points afterwards.
Tested-By: Lorenz Stechauner
On 05.10.21 13:28, Dominik Csapak wrote:
this series is a continuation of my previous multi tab / disk
On Mon, Oct 18, 2021 at 11:56 PM Thomas Lamprecht wrote:
> On 18.10.21 20:09, Travis Osterman wrote:
> > On Mon, Oct 18, 2021 at 12:05 AM Thomas Lamprecht <
> t.lampre...@proxmox.com wrote:
> >> On 18.10.21 04:04, Travis Osterman wrote:
> >>> I think the title says it all. I use open-iscsi for my