one small nit (inline), otherwise the series looks good to me
On 10/20/22 15:36, Aaron Lauterer wrote:
To get more details for a single OSD, we add two new endpoints:
* nodes/{node}/ceph/osd/{osdid}/metadata
* nodes/{node}/ceph/osd/{osdid}/lv-info
The {osdid} endpoint itself gets a new GET handler to return the index.
Thanks!
—
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl
-- Original Message --
From: "DERUMIER, Alexandre"
To: "pve-devel@lists.proxmox.com"; "m...@tuxis.nl"
Date: 21/10/2022 06:55:08
Subject: Re: [pve-devel] [PATCH pve-common] fix #4299: network : disable_ipv6: f
Currently, trying to delete a non-empty IPSet will throw an error. Manually
deleting all members of the set can be a time-consuming process, which the
'force' parameter allows one to bypass.
Signed-off-by: Leo Nunner
---
src/PVE/API2/Firewall/IPSet.pm | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
Currently, deleting an IPSet with members is not possible. The user
first needs to delete all the members individually, and only then can
they delete the IPSet itself. This patch adds a 'force' parameter that
enables the deletion of the IPSet and all its members, allowing this
step to be bypassed.
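For reference, the gist of the backend change can be sketched as follows; this
is a minimal, self-contained sketch of the idea (the data layout and helper
name are made up for illustration), not the actual patch:

#!/usr/bin/perl
use strict;
use warnings;

# toy in-memory stand-in for the parsed firewall configuration
my $ipsets = { management => ['192.168.0.0/24', '10.0.0.0/8'] };

sub delete_ipset {
    my ($name, $force) = @_;
    die "no such IPSet '$name'\n" if !exists $ipsets->{$name};
    # without force, refuse to delete a set that still has members
    die "IPSet '$name' is not empty\n"
        if !$force && scalar(@{$ipsets->{$name}});
    # with force, the members simply go away together with the set
    delete $ipsets->{$name};
}

delete_ipset('management', 1); # force=1: deletes the set despite its members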
Expose the 'force' parameter through the UI when deleting an IPSet.
Small QoL improvement: the member panel now gets cleared
automatically whenever an IPSet is deselected, which is necessary
when deleting a non-empty set.
Signed-off-by: Leo Nunner
---
www/manager6/panel/IPSet.js | 32 ++
when the current node is not part of the quorate partition, there is no
way to determine if a user has the proper permissions or is even still
enabled, so we should prevent ticket creation for these users.
The only exception we should make is for node-local users (PAM realm),
since there is a good
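The exception can be wired up with the existing quorum check; a rough sketch,
assuming the existing PVE::Cluster::check_cfs_quorum() helper (which dies when
the node has no quorum) and reducing the surrounding ticket logic to a
hypothetical helper:

use strict;
use warnings;
use PVE::Cluster;

# during ticket creation: only node-local (PAM) users can be verified
# without the cluster file system, everyone else requires quorum
sub assert_ticket_allowed {    # hypothetical name, not in the patch
    my ($username) = @_;
    my ($user, $realm) = $username =~ m/^(.+)\@([^@]+)$/
        or die "invalid username '$username'\n";
    return if $realm eq 'pam';    # verified against the local node
    PVE::Cluster::check_cfs_quorum();    # dies without quorum
}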
applied both patches, thanks
applied, thanks
To get more details for a single OSD, we add two new endpoints:
* nodes/{node}/ceph/osd/{osdid}/metadata
* nodes/{node}/ceph/osd/{osdid}/lv-info
The {osdid} endpoint itself gets a new GET handler to return the index.
The metadata endpoint provides various metadata regarding the OSD, such as:
* process
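For readers not familiar with the API plumbing, a stripped-down sketch of what
registering such a GET handler looks like; the path and the returned
sub-endpoint names follow the cover letter, everything else (package name,
permissions, schema details) is illustrative:

package PVE::API2::Ceph::OSDDetails;    # illustrative package name

use strict;
use warnings;

use PVE::JSONSchema qw(get_standard_option);
use base qw(PVE::RESTHandler);

__PACKAGE__->register_method({
    name => 'osdindex',
    path => '{osdid}',
    method => 'GET',
    description => "OSD index (lists the detail sub-endpoints).",
    permissions => { check => ['perm', '/', ['Sys.Audit']] },
    parameters => {
        additionalProperties => 0,
        properties => {
            node => get_standard_option('pve-node'),
            osdid => { type => 'integer', description => "OSD ID" },
        },
    },
    returns => { type => 'array' },
    code => sub {
        my ($param) = @_;
        # the new GET handler only returns the index of sub-endpoints
        return [{ name => 'metadata' }, { name => 'lv-info' }];
    },
});

1;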
high-level comments first:
while it seems to work (just tested whether there are IOMMU groups in the VM),
I'm missing some reasons for the decisions made here:
e.g. I guess we want to enable 'intremap', and that implies
'kernel-irqchip' cannot be 'full', but why do we want 'split' here?
also, why cac
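For context, the question boils down to which flags end up on the QEMU
command line; a hand-rolled sketch of the interplay (variable names are
illustrative, this is not the patch itself):

use strict;
use warnings;

my $machine_flags = ['type=q35'];
my $devices = [];

my $viommu = 1;    # as set via the new machine property
if ($viommu) {
    # the emulated intel-iommu with interrupt remapping (intremap=on)
    # cannot be combined with kernel-irqchip=full, as a fully in-kernel
    # irqchip would deliver interrupts without consulting the emulated
    # IOMMU; 'split' keeps the IOAPIC in QEMU where remapping can happen
    push @$machine_flags, 'kernel-irqchip=split';
    push @$devices, '-device', 'intel-iommu,intremap=on,caching-mode=on';
}

my @cmd = ('qemu-system-x86_64', '-machine', join(',', @$machine_flags), @$devices);
print join(' ', @cmd), "\n";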
On 9/21/22 11:07, Markus Frank wrote:
added a few test-cases to test the new machine parameter with viommu
Signed-off-by: Markus Frank
---
test/restore-config-expected/401.conf | 14 +
test/restore-config-expected/402.conf | 14 +
test/restore-config-input/401.conf
comments inline:
On 9/21/22 11:07, Markus Frank wrote:
Added a checkbox to enable viommu if q35 is selected; otherwise (i440fx)
the checkbox is disabled.
The UI also needs to parse the new machine parameter as a PropertyString.
Signed-off-by: Markus Frank
---
www/manager6/qemu/MachineEdit.js
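On the backend, the same property string goes through
PVE::JSONSchema::parse_property_string; a simplified example with a stand-in
format definition (the real machine format has more fields than shown here):

use strict;
use warnings;
use PVE::JSONSchema;

# simplified stand-in for the machine property-string format
my $machine_fmt = {
    type => {
        type => 'string',
        default_key => 1,
        optional => 1,
    },
    viommu => {
        type => 'boolean',
        optional => 1,
    },
};

my $parsed = PVE::JSONSchema::parse_property_string($machine_fmt, 'q35,viommu=1');
print "machine type: $parsed->{type}, viommu: $parsed->{viommu}\n";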
On which CPU vendors have you tested this?
We use similar settings in our training environment, and even on AMD CPUs it
is necessary to follow the Intel steps to enable it in a VM, as QEMU
apparently implements the Intel variant.
I think this should be added to the (eventual) documentation after veri
added file for caching (Bugzilla case #1965)
Signed-off-by: Stefan Hrdlicka
---
data/PVE/Cluster.pm | 1 +
data/src/status.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
index abcc46d..2afae73 100644
--- a/data/PVE/Cluster.pm
+++ b/data/PVE/C
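Presumably the Cluster.pm hunk just extends the hash of files that are
observed and cached via pmxcfs; sketched context (surrounding entries quoted
from memory, not the verbatim diff):

# in data/PVE/Cluster.pm: files below /etc/pve that are tracked and
# cached through the cluster file system; one new entry enables the
# caching for the firewall config
my $observed = {
    'vzdump.cron' => 1,
    'storage.cfg' => 1,
    'user.cfg' => 1,
    # ...
    'firewall/cluster.fw' => 1,    # new entry from this patch
};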
For large IP sets (for example >25k entries) it takes noticeably longer to
parse the files; this commit caches the cluster.fw file and reduces parsing
time.
Signed-off-by: Stefan Hrdlicka
---
src/PVE/Firewall.pm | 110 +++-
1 file changed, 77 insertions(+), 33 deletions(-)
This patch adds the firewall/cluster.fw file to caching. On my system, with a
list of 25k IP sets, CPU consumption for the process went from ~20% to ~10%
with this caching enabled. Still pretty high, but better than before.
pve-firewall
---
src/PVE/Firewall.pm | 110 +++
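The caching approach itself can be illustrated with a small self-contained
sketch: key the parsed result on the file's mtime and only re-parse when the
file changed (the real patch in src/PVE/Firewall.pm is more involved):

use strict;
use warnings;

my $cache = {};

# return the parsed config, re-running the parser only when the
# file's modification time differs from the cached one
sub parse_fw_file_cached {
    my ($filename, $parser) = @_;
    my $mtime = (stat($filename))[9] // 0;
    my $entry = $cache->{$filename};
    if (!$entry || $entry->{mtime} != $mtime) {
        $entry = { mtime => $mtime, data => $parser->($filename) };
        $cache->{$filename} = $entry;
    }
    return $entry->{data};
}

An mtime check is cheap but coarse; hashing the file contents would also
catch rewrites within the same second.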