Re: [pve-devel] [PATCH many v3] add cluster-wide hardware device mapping

2022-09-23 Thread DERUMIER, Alexandre
Hi Dominik, I have finished my tests with PCI passthrough && mdev, I didn't have any problems this time, everything is working fine for me! On 20/09/22 at 14:50, Dominik Csapak wrote: > this series aims to add a cluster-wide device mapping for pci and usb devices. > so that an admin can configure a

[pve-devel] [PATCH V5 pve-manager 1/2] fix #2822: add iscsi, lvm, lvmthin & zfs storage for all cluster nodes

2022-09-23 Thread Stefan Hrdlicka
This adds a dropdown box for iSCSI, LVM, LVMThin & ZFS storage options where a cluster node needs to be chosen. By default, the current node is selected. It restricts the storage to be available only on the selected node. Signed-off-by: Stefan Hrdlicka --- www/manager6/Makefile

[pve-devel] [PATCH V5 pve-manager 2/2] cleanup: "var" to "let", fix some indentation in related files

2022-09-23 Thread Stefan Hrdlicka
Signed-off-by: Stefan Hrdlicka --- www/manager6/storage/Base.js| 10 +- www/manager6/storage/IScsiEdit.js | 6 +++--- www/manager6/storage/LVMEdit.js | 14 +++--- www/manager6/storage/LvmThinEdit.js | 18 +- www/manager6/storage/ZFSPoolEdit.js | 23 +

[pve-devel] [PATCH V5 pve-manager 0/2] fix #2822: add iscsi, lvm, lvmthin & zfs

2022-09-23 Thread Stefan Hrdlicka
V1 -> V2: # pve-storage * removed because patch is not needed # pve-manager (1/3) * removed storage controller from V1 * added custom ComboBox with API URL & setNodeName function * added scan node selection for iSCSI * scan node selection field no longer sent to the server ## (optional) pve-manager (

[pve-devel] applied: [PATCH proxmox-offline-mirror 1/2] fix #4259: mirror: add ignore-errors option

2022-09-23 Thread Wolfgang Bumiller
applied both patches and cleaned up 2 error handlers (one from this patch and one older one) On Fri, Sep 23, 2022 at 12:33:51PM +0200, Fabian Grünbichler wrote: > to make fetching errors from broken repositories non-fatal. > > Signed-off-by: Fabian Grünbichler > --- > based on top of "extend/add

Re: [pve-devel] [PATCH V2 qemu-server 0/2] add virtio-mem support

2022-09-23 Thread DERUMIER, Alexandre
Hi, has somebody had time to review this patch series? Do I need to rework it? Any comments? Regards, Alexandre On Wednesday, 24 August 2022 at 13:34 +0200, Alexandre Derumier wrote: > This patch adds virtio-mem support, through a new maxmemory option. > > a 4GB static memory is needed fo

[pve-devel] [PATCH proxmox-offline-mirror 1/2] fix #4259: mirror: add ignore-errors option

2022-09-23 Thread Fabian Grünbichler
to make fetching errors from broken repositories non-fatal. Signed-off-by: Fabian Grünbichler --- based on top of "extend/add commands" series from 20220921081242.1139249-1-f.gruenbich...@proxmox.com src/bin/proxmox-offline-mirror.rs | 2 ++ src/bin/proxmox_offline_mirror_cmds/conf

[pve-devel] [PATCH proxmox-offline-mirror 2/2] mirror: collect and summarize warnings

2022-09-23 Thread Fabian Grünbichler
the output can get quite long and warnings can easily be missed otherwise. Signed-off-by: Fabian Grünbichler --- src/mirror.rs | 15 +-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/src/mirror.rs b/src/mirror.rs index e655847..f8afd2b 100644 --- a/src/mirror.rs +++

Re: [pve-devel] [pbs-devel] [PATCH proxmox-backup] fix #4165: SMART: add raw field

2022-09-23 Thread Thomas Lamprecht
On 22/09/2022 at 12:35, Dominik Csapak wrote: > > the only (minor) thing is that the wt patch could have handled the current > situation > (pve raw+value, pbs normalized+value), e.g. by using the field 'convert' or > 'calculate' > methods of ExtJS (we could have had a 'real_raw' and 'real_normal

[pve-devel] applied: [pbs-devel] [PATCH pve-storage] fix #4165: disk: SMART: add normalized field

2022-09-23 Thread Thomas Lamprecht
On 21/07/2022 at 12:45, Matthias Heiserer wrote: > This makes it consistent with the naming scheme in PBS/GUI. > Keep value for API stability reasons and remove it in the next major version. > > Signed-off-by: Matthias Heiserer > --- > PVE/Diskmanage.pm | 2 ++ > ..

[pve-devel] applied: [pbs-devel] [PATCH proxmox-backup] fix #4165: SMART: add raw field

2022-09-23 Thread Thomas Lamprecht
On 21/07/2022 at 12:45, Matthias Heiserer wrote: > This makes it consistent with the naming scheme in PVE/GUI. > Keep value for API stability reasons, and remove it in the next major version. > > Signed-off-by: Matthias Heiserer > --- > src/tools/disks/smart.rs | 9 +++-- > 1 file changed, 7 in

[pve-devel] applied-series: [PATCH qemu-server v4 0/2] qmeventd: improve shutdown behaviour

2022-09-23 Thread Wolfgang Bumiller
applied both patches, thanks On Fri, Sep 23, 2022 at 11:51:13AM +0200, Dominik Csapak wrote: > includes the following improvements: > * increases 'force cleanup' timeout to 60s (from 5) > * saves individual timeout for each vm > * don't force cleanup for vms where normal cleanup worked > * sending

Re: [pve-devel] [PATCH pve-docs 1/1] add pre/post-clone events to example hookscript

2022-09-23 Thread Stefan Hanreich
On 9/23/22 11:55, Stefan Hanreich wrote: Signed-off-by: Stefan Hanreich --- examples/guest-example-hookscript.pl | 12 1 file changed, 12 insertions(+) diff --git a/examples/guest-example-hookscript.pl b/examples/guest-example-hookscript.pl index adeed59..345b5d9 100755 --- a

[pve-devel] [PATCH pve-container 1/1] Add CT hooks for pre/post-clone

2022-09-23 Thread Stefan Hanreich
Signed-off-by: Stefan Hanreich --- src/PVE/API2/LXC.pm | 4 1 file changed, 4 insertions(+) diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm index 589f96f..d6ebc08 100644 --- a/src/PVE/API2/LXC.pm +++ b/src/PVE/API2/LXC.pm @@ -1609,6 +1609,8 @@ __PACKAGE__->register_method({

[pve-devel] [PATCH qemu-server/pve-container/pve-docs 0/1] Add pre/post-clone hooks

2022-09-23 Thread Stefan Hanreich
This patch adds pre/post-clone hooks when the user clones a CT/VM from the Web UI / CLI. I have tested this with both VMs/CTs via Web UI and CLI. Are there any other places where the hook should get triggered that I missed? Clone is a bit special since it can either target the same node o

[pve-devel] [PATCH qemu-server 1/1] Add VM hooks for pre/post-clone

2022-09-23 Thread Stefan Hanreich
Signed-off-by: Stefan Hanreich --- PVE/API2/Qemu.pm | 4 1 file changed, 4 insertions(+) diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm index 3ec31c2..23a7658 100644 --- a/PVE/API2/Qemu.pm +++ b/PVE/API2/Qemu.pm @@ -3417,6 +3417,8 @@ __PACKAGE__->register_method({ my ($conffil

[pve-devel] [PATCH pve-docs 1/1] add pre/post-clone events to example hookscript

2022-09-23 Thread Stefan Hanreich
Signed-off-by: Stefan Hanreich --- examples/guest-example-hookscript.pl | 12 1 file changed, 12 insertions(+) diff --git a/examples/guest-example-hookscript.pl b/examples/guest-example-hookscript.pl index adeed59..345b5d9 100755 --- a/examples/guest-example-hookscript.pl +++ b/exa

[pve-devel] [PATCH qemu-server v4 1/2] qmeventd: rework 'forced_cleanup' handling and set timeout to 60s

2022-09-23 Thread Dominik Csapak
currently, the 'forced_cleanup' (sending SIGKILL to the qemu process) is intended to be triggered 5 seconds after sending the initial shutdown signal (SIGTERM), which is sadly not enough for some setups. Accidentally, it could be triggered earlier than 5 seconds if a SIGALRM triggers in the times
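
A minimal sketch of the per-VM timeout idea described above (struct and function names are invented for illustration and do not match the actual qmeventd.c code): each client stores its own kill deadline, and a periodic pass over all clients sends SIGKILL only where that deadline has passed:

    #include <signal.h>
    #include <stddef.h>
    #include <sys/types.h>
    #include <time.h>

    /* hypothetical per-VM state; the real struct in qmeventd.c differs */
    struct client {
        pid_t pid;            /* QEMU pid */
        time_t kill_deadline; /* 0 = no forced cleanup scheduled */
    };

    /* arm the forced cleanup 60s from now instead of a global 5s SIGALRM */
    static void schedule_forced_cleanup(struct client *c)
    {
        c->kill_deadline = time(NULL) + 60;
    }

    /* called when normal cleanup succeeded: cancel the pending SIGKILL */
    static void cancel_forced_cleanup(struct client *c)
    {
        c->kill_deadline = 0;
    }

    /* run periodically from the main loop */
    static void check_forced_cleanups(struct client *clients, size_t n)
    {
        time_t now = time(NULL);
        for (size_t i = 0; i < n; i++) {
            if (clients[i].kill_deadline && now >= clients[i].kill_deadline) {
                kill(clients[i].pid, SIGKILL);
                clients[i].kill_deadline = 0;
            }
        }
    }

With a per-client deadline, cancelling the forced cleanup is just clearing the field, rather than juggling a single global SIGALRM timer shared by all VMs.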

[pve-devel] [PATCH qemu-server v4 0/2] qmeventd: improve shutdown behaviour

2022-09-23 Thread Dominik Csapak
includes the following improvements: * increases 'force cleanup' timeout to 60s (from 5s) * saves an individual timeout for each VM * doesn't force cleanup for VMs where normal cleanup worked * sends QMP quit instead of SIGTERM (less log noise) changes from v3: * merge CleanupData into Client (pidfd,t

[pve-devel] [PATCH qemu-server v4 2/2] qmeventd: send QMP 'quit' command instead of SIGTERM

2022-09-23 Thread Dominik Csapak
this is functionally the same, but sending SIGTERM has the ugly side effect of printing the following to the log: > QEMU[]: kvm: terminating on signal 15 from pid (/usr/sbin/qmeventd) while sending a QMP quit command does not. Signed-off-by: Dominik Csapak --- qmeventd/qmeventd.c | 14 +++
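
A rough sketch of the replacement path (file-descriptor handling and function name are assumptions, not the actual patch; qmeventd already holds a negotiated QMP connection, so capability negotiation is omitted): instead of kill(pid, SIGTERM), write the QMP 'quit' command to the monitor socket:

    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* send {"execute":"quit"} on an fd whose QMP capabilities negotiation is
     * already done; this replaces kill(pid, SIGTERM) and avoids the
     * "terminating on signal 15" line in the log */
    static int qmp_send_quit(int qmp_fd)
    {
        const char *cmd = "{\"execute\":\"quit\"}\n";
        ssize_t len = (ssize_t)strlen(cmd);
        return write(qmp_fd, cmd, len) == len ? 0 : -1;
    }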

Re: [pve-devel] [PATCH qemu-server v2 2/3] qmeventd: cancel 'forced cleanup' when normal cleanup succeeds

2022-09-23 Thread Dominik Csapak
On 9/23/22 10:31, Wolfgang Bumiller wrote: On Thu, Sep 22, 2022 at 04:19:34PM +0200, Dominik Csapak wrote: instead of always sending a SIGKILL to the target pid. It was not that much of a problem since the timeout previously was 5 seconds and we used pidfds where possible, thus the chance of kill

Re: [pve-devel] [PATCH qemu-server v3 0/3] qmeventd: improve shutdown behaviour

2022-09-23 Thread Dominik Csapak
Sorry, disregard; I was too fast with this version and did not see that Wolfgang wrote something about 2/3 too.

Re: [pve-devel] [PATCH qemu-server v2 2/3] qmeventd: cancel 'forced cleanup' when normal cleanup succeeds

2022-09-23 Thread Wolfgang Bumiller
On Thu, Sep 22, 2022 at 04:19:34PM +0200, Dominik Csapak wrote: > instead of always sending a SIGKILL to the target pid. > It was not that much of a problem since the timeout previously was 5 > seconds and we used pidfds where possible, thus the chance of killing the > wrong process was rather slim.

[pve-devel] [PATCH qemu-server v3 0/3] qmeventd: improve shutdown behaviour

2022-09-23 Thread Dominik Csapak
includes the following improvements: * increases 'force cleanup' timeout to 60s (from 5s) * saves an individual timeout for each VM * doesn't force cleanup for VMs where normal cleanup worked * sends QMP quit instead of SIGTERM (less log noise) changes from v2: * change from cast of the function to ca

[pve-devel] [PATCH qemu-server v3 1/3] qmeventd: rework 'forced_cleanup' handling and set timeout to 60s

2022-09-23 Thread Dominik Csapak
currently, the 'forced_cleanup' (sending SIGKILL to the qemu process) is intended to be triggered 5 seconds after sending the initial shutdown signal (SIGTERM), which is sadly not enough for some setups. Accidentally, it could be triggered earlier than 5 seconds if a SIGALRM triggers in the times

[pve-devel] [PATCH qemu-server v3 2/3] qmeventd: cancel 'forced cleanup' when normal cleanup succeeds

2022-09-23 Thread Dominik Csapak
instead of always sending a SIGKILL to the target pid. It was not that much of a problem since the timeout previously was 5 seconds and we used pidfds where possible, thus the chance of killing the wrong process was rather slim. Now we increased the timeout to 60s, which makes the race a bit more li
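
The pidfd remark can be illustrated with a short sketch (names are illustrative, not the actual qmeventd code; pidfd_open and pidfd_send_signal are invoked via syscall(2) on recent Linux kernels since older glibc versions lack wrappers): a signal sent through a pidfd obtained while the QEMU process was still known to be the right one can never hit a recycled PID:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* obtain a pidfd for the QEMU process right after it is identified */
    static int get_pidfd(pid_t pid)
    {
        return (int)syscall(SYS_pidfd_open, pid, 0);
    }

    /* later, the forced cleanup signals via the pidfd: if the original
     * process already exited, this fails with ESRCH instead of killing an
     * unrelated process that happens to reuse the pid */
    static int force_kill(int pidfd)
    {
        return (int)syscall(SYS_pidfd_send_signal, pidfd, SIGKILL, NULL, 0);
    }

This is what keeps the longer 60s window acceptable once the forced cleanup is also cancelled after a successful normal cleanup.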

[pve-devel] [PATCH qemu-server v3 3/3] qmeventd: send QMP 'quit' command instead of SIGTERM

2022-09-23 Thread Dominik Csapak
this is functionally the same, but sending SIGTERM has the ugly side effect of printing the following to the log: > QEMU[]: kvm: terminating on signal 15 from pid (/usr/sbin/qmeventd) while sending a QMP quit command does not. Signed-off-by: Dominik Csapak --- qmeventd/qmeventd.c | 14 +++

[pve-devel] [PATCH v2 guest-common] replication: avoid "expected snapshot missing" warning when irrelevant

2022-09-23 Thread Fiona Ebner
Only print it when there is a snapshot that would've been removed without the safeguard. Mostly relevant when a new volume is added to an already replicated guest. Fixes replication tests in pve-manager. Fixes: c0b2948 ("replication: prepare: safeguard against removal if expected snapshot is mis

Re: [pve-devel] [PATCH qemu-server v2 1/3] qmeventd: rework 'forced_cleanup' handling and set timeout to 60s

2022-09-23 Thread Wolfgang Bumiller
On Thu, Sep 22, 2022 at 04:19:33PM +0200, Dominik Csapak wrote: > currently, the 'forced_cleanup' (sending SIGKILL to the qemu process) > is intended to be triggered 5 seconds after sending the initial shutdown > signal (SIGTERM), which is sadly not enough for some setups. > > Accidentally, it cou

Re: [pve-devel] [PATCH qemu-server 2/3] qmeventd: cancel 'forced cleanup' when normal cleanup succeeds

2022-09-23 Thread Wolfgang Bumiller
On Thu, Sep 22, 2022 at 01:37:57PM +0200, Dominik Csapak wrote: > On 9/22/22 12:14, Matthias Heiserer wrote: > > On 21.09.2022 14:49, Dominik Csapak wrote: > > > instead of always sending a SIGKILL to the target pid. > > > It was not that much of a problem since the timeout previously was 5 > > > s