Re: [pve-devel] [PATCH v3 manager 1/3] api ceph osd: add OSD index, metadata and lv-info

2022-10-24 Thread Dominik Csapak
one small nit (inline) , otherwise the series looks good to me On 10/20/22 15:36, Aaron Lauterer wrote: To get more details for a single OSD, we add two new endpoints: * nodes/{node}/ceph/osd/{osdid}/metadata * nodes/{node}/ceph/osd/{osdid}/lv-info The {osdid} endpoint itself gets a new GET han
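For a concrete picture of the first new endpoint, here is a minimal sketch of querying it over the PVE REST API; the host, node, OSD id, and API token are placeholder assumptions, and the returned fields depend on the final patch.

```python
# Sketch only: fetch the new per-OSD metadata endpoint via the PVE REST API.
# HOST, HEADERS, node and osdid are placeholders, not values from the patch.
import requests

HOST = "https://pve.example.com:8006"
HEADERS = {"Authorization": "PVEAPIToken=root@pam!monitor=SECRET"}

def get_osd_metadata(node: str, osdid: int) -> dict:
    url = f"{HOST}/api2/json/nodes/{node}/ceph/osd/{osdid}/metadata"
    resp = requests.get(url, headers=HEADERS, verify=False)  # test setups often use self-signed certs
    resp.raise_for_status()
    return resp.json()["data"]

print(get_osd_metadata("pve1", 0))
```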

Re: [pve-devel] [PATCH pve-common] fix #4299: network : disable_ipv6: fix path checking

2022-10-24 Thread Mark Schouten via pve-devel
Thanks! — Mark Schouten, CTO Tuxis B.V. m...@tuxis.nl -- Original Message -- From "DERUMIER, Alexandre" To "pve-devel@lists.proxmox.com" ; "m...@tuxis.nl" Date 21/10/2022 06:55:08 Subject Re: [pve-devel] [PATCH pve-common] fix #4299: network : disable_ipv6: f

[pve-devel] [PATCH firewall 1/1] fix #4268: add 'force' parameter to delete IPSet with members

2022-10-24 Thread Leo Nunner
Currently, trying to delete a non-empty IPSet will throw an error. Manually deleting all members of the set might be a time-consuming process, which the force parameter allows one to bypass. Signed-off-by: Leo Nunner --- src/PVE/API2/Firewall/IPSet.pm | 7 ++- 1 file changed, 6 insertions(+), 1
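As a rough usage sketch of the proposed parameter (assuming it is exposed on the cluster-level IPSet path and that a suitable API token exists), deleting a non-empty set in one call could look like this:

```python
# Sketch only: delete an IPSet and all of its members in one request by
# passing the proposed 'force' flag; path and credentials are assumptions.
import requests

HOST = "https://pve.example.com:8006"
HEADERS = {"Authorization": "PVEAPIToken=root@pam!fw=SECRET"}

def delete_ipset(name: str, force: bool = True) -> None:
    url = f"{HOST}/api2/json/cluster/firewall/ipset/{name}"
    resp = requests.delete(url, headers=HEADERS, params={"force": int(force)}, verify=False)
    resp.raise_for_status()

delete_ipset("blocklist")
```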

[pve-devel] [PATCH firewall manager] delete IPset with members

2022-10-24 Thread Leo Nunner
Currently, deleting an IPSet with members is not possible. The user first needs to delete all the members individually, and only then can they delete the IPSet itself. This patch adds a 'force' parameter that enables the deletion of the IPSet and all its members, allowing this step to be bypassed.
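For comparison, the pre-patch workflow that 'force' is meant to replace, sketched against the existing API (paths, the 'cidr' member key, and credentials are assumptions; error handling is omitted):

```python
# Sketch of the current workflow: list the IPSet's members, delete each one,
# then delete the now-empty IPSet itself.
import requests

HOST = "https://pve.example.com:8006"
HEADERS = {"Authorization": "PVEAPIToken=root@pam!fw=SECRET"}
BASE = f"{HOST}/api2/json/cluster/firewall/ipset"

def delete_ipset_without_force(name: str) -> None:
    members = requests.get(f"{BASE}/{name}", headers=HEADERS, verify=False).json()["data"]
    for entry in members:
        requests.delete(f"{BASE}/{name}/{entry['cidr']}", headers=HEADERS, verify=False)
    requests.delete(f"{BASE}/{name}", headers=HEADERS, verify=False)
```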

[pve-devel] [PATCH manager] fix #4268: add checkbox for force deletion of IPSet

2022-10-24 Thread Leo Nunner
Expose the 'force' parameter through the UI when deleting an IPSet. Small QoL improvement: the member panel now gets cleared automatically whenever an IPSet is deselected, which is necessary when deleting a non-empty set. Signed-off-by: Leo Nunner --- www/manager6/panel/IPSet.js | 32 ++

[pve-devel] [RFC PATCH access-control] auth: require quorum for ticket creation of non-PAM users

2022-10-24 Thread Dominik Csapak
when the current node is not part of the quorate partition, there is no way to determine if a user has the proper permissions or is even still enabled, so we should prevent ticket creation for these users. The only exception we should make is for node-local users (PAM realm), since there is a good
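A minimal sketch of the decision being proposed, not the actual pve-access-control code: without quorum, only realms that can be verified locally (PAM) may still get a ticket.

```python
# Illustrative logic only: PAM users are authenticated against the local
# system, so they can still obtain a ticket without quorum; all other realms
# depend on cluster-wide state (permissions, enabled flag) that cannot be
# trusted while the node is outside the quorate partition.
def may_create_ticket(username: str, node_is_quorate: bool) -> bool:
    realm = username.rsplit("@", 1)[-1]
    if node_is_quorate:
        return True
    return realm == "pam"

assert may_create_ticket("root@pam", node_is_quorate=False)
assert not may_create_ticket("alice@pve", node_is_quorate=False)
```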

[pve-devel] applied: [PATCH V2 pve-manager 0/2] fix #1981: get next free disk id on change of bus/device

2022-10-24 Thread Dominik Csapak
applied both patches, thanks

[pve-devel] applied: [PATCH proxmox 1/1] section config: parse additional properties when schema allows it

2022-10-24 Thread Wolfgang Bumiller
applied, thanks

[pve-devel] [PATCH v3 manager 1/3] api ceph osd: add OSD index, metadata and lv-info

2022-10-24 Thread Aaron Lauterer
To get more details for a single OSD, we add two new endpoints: * nodes/{node}/ceph/osd/{osdid}/metadata * nodes/{node}/ceph/osd/{osdid}/lv-info The {osdid} endpoint itself gets a new GET handler to return the index. The metadata one provides various metadata regarding the OSD. Such as * process
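Analogous to the metadata sketch above, the second endpoint could be queried like this; again the host, token, node, and OSD id are placeholders, and any extra parameters lv-info may take are not visible from this excerpt.

```python
# Sketch only: fetch LVM details for a single OSD via the new lv-info endpoint.
import requests

HOST = "https://pve.example.com:8006"
HEADERS = {"Authorization": "PVEAPIToken=root@pam!monitor=SECRET"}

def get_osd_lv_info(node: str, osdid: int) -> dict:
    url = f"{HOST}/api2/json/nodes/{node}/ceph/osd/{osdid}/lv-info"
    resp = requests.get(url, headers=HEADERS, verify=False)
    resp.raise_for_status()
    return resp.json()["data"]
```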

Re: [pve-devel] [PATCH qemu-server 2/3] fix #3784: Parameter for guest vIOMMU & machine as property-string

2022-10-24 Thread Dominik Csapak
high level comments first: while it seems to work (just tested if there are iommu groups in the vm), i'm missing some reasons for the decisions made here: e.g. i guess we want to enable 'intremap' and that implies 'kernel-irqchip' cannot be 'full', but why do we want 'split' here? also, why cac
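For context on the 'split' question, this is the constraint as I understand it from QEMU, sketched as the arguments such a configuration would roughly generate (Python is used only to keep the examples in one language; the flags the patch actually emits may differ):

```python
# The emulated intel-iommu with interrupt remapping (intremap=on) requires the
# in-kernel irqchip to be 'split' (or 'off'); with 'full', the IOAPIC is
# handled entirely inside KVM, so QEMU's vIOMMU cannot remap its interrupts.
qemu_args = [
    "qemu-system-x86_64",
    "-machine", "q35,kernel-irqchip=split",
    "-device", "intel-iommu,intremap=on",
]
print(" ".join(qemu_args))
```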

Re: [pve-devel] [PATCH qemu-server 3/3] added test-cases for new machine-syntax & viommu

2022-10-24 Thread Dominik Csapak
On 9/21/22 11:07, Markus Frank wrote: added a few test-cases to test the new machine parameter with viommu Signed-off-by: Markus Frank --- test/restore-config-expected/401.conf | 14 + test/restore-config-expected/402.conf | 14 + test/restore-config-input/401.conf

Re: [pve-devel] [PATCH manager] ui: MachineEdit with viommu checkbox

2022-10-24 Thread Dominik Csapak
comments inline: On 9/21/22 11:07, Markus Frank wrote: Added a Checkbox to enable viommu, if q35 is selected. Otherwise (i440fx) the checkbox is disabled. The UI also needs to parse the new machine parameter as PropertyString. Signed-off-by: Markus Frank --- www/manager6/qemu/MachineEdit.js

Re: [pve-devel] [PATCH qemu-server 0/3] vIOMMU-Feature

2022-10-24 Thread Aaron Lauterer
On which CPU vendors have you tested this? We use similar settings in our training environment, and even on AMD CPUs it is necessary to follow the Intel steps to enable it in a VM, as QEMU apparently implements the Intel variant. I think this should be added to the (eventual) documentation after veri

[pve-devel] [PATCH firewall/cluster 2/2] register new file firewall/cluster.fw

2022-10-24 Thread Stefan Hrdlicka
Register the file for caching, for Bugzilla case #1965. Signed-off-by: Stefan Hrdlicka --- data/PVE/Cluster.pm | 1 + data/src/status.c | 1 + 2 files changed, 2 insertions(+) diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm index abcc46d..2afae73 100644 --- a/data/PVE/Cluster.pm +++ b/data/PVE/C

[pve-devel] [PATCH firewall/cluster 1/2] fix #1965: cache firewall/cluster.fw file

2022-10-24 Thread Stefan Hrdlicka
for large IP sets (for example > 25k) it takes noticeably longer to parse the files; this commit caches the cluster.fw file and reduces parsing time. Signed-off-by: Stefan Hrdlicka --- src/PVE/Firewall.pm | 110 +++- 1 file changed, 77 insertions(+), 33 dele
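The underlying idea is to re-parse cluster.fw only when the file actually changed; a minimal, generic sketch of such a cache in Python (the real patch is Perl in src/PVE/Firewall.pm and may key on the cluster filesystem's file version rather than mtime):

```python
# Minimal sketch of mtime-based caching: reuse the previously parsed structure
# unless the file changed since the last parse.
import os

_cache = {"mtime": None, "parsed": None}

def load_clusterfw(path="/etc/pve/firewall/cluster.fw"):
    mtime = os.stat(path).st_mtime
    if _cache["mtime"] != mtime:
        with open(path) as fh:
            _cache["parsed"] = parse_clusterfw(fh.read())
        _cache["mtime"] = mtime
    return _cache["parsed"]

def parse_clusterfw(raw: str):
    # stand-in for the expensive rule/IPSet parsing done by pve-firewall
    return raw.splitlines()
```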

[pve-devel] [PATCH firewall/cluster 0/2] cache firewall/cluster.fw

2022-10-24 Thread Stefan Hrdlicka
This patch adds firewall/cluster.fw to caching. On my system with a list of 25k IP sets, CPU consumption for the process went from ~20 % to ~10 % with this caching enabled. Still pretty high, but better than before. pve-firewall --- src/PVE/Firewall.pm | 110 +++