Hi Dominik,
I have finished my tests with PCI passthrough && mdev.
I didn't have any problems this time, everything is working fine for me!
On 20/09/22 at 14:50, Dominik Csapak wrote:
> this series aims to add a cluster-wide device mapping for pci and usb devices.
> so that an admin can configure a
This adds a dropdown box for iSCSI, LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default the current node is
selected. It restricts the storage to be available only on the
selected node.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/Makefile
Signed-off-by: Stefan Hrdlicka
---
www/manager6/storage/Base.js        | 10 +-
www/manager6/storage/IScsiEdit.js   |  6 +++---
www/manager6/storage/LVMEdit.js     | 14 +++---
www/manager6/storage/LvmThinEdit.js | 18 +-
www/manager6/storage/ZFSPoolEdit.js | 23 +
V1 -> V2:
# pve-storage
* removed because patch is not needed
# pve-manager (1/3)
* remove storage controller from V1
* added custom ComboBox with API URL & setNodeName function
* added scan node selection for iSCSI
* scan node selection field is no longer sent to the server
## (optional) pve-manager (
applied both patches and cleaned up 2 error handlers (one from this
patch and one older one)
On Fri, Sep 23, 2022 at 12:33:51PM +0200, Fabian Grünbichler wrote:
> to make fetching errors from broken repositories non-fatal.
>
> Signed-off-by: Fabian Grünbichler
> ---
> based on top of "extend/add
Hi,
has somebody had time to review this patch series?
Do I need to rework it? Any comments?
Regards,
Alexandre
On Wednesday, 24 August 2022 at 13:34 +0200, Alexandre Derumier wrote:
> This patch adds virtio-mem support through a new maxmemory option.
>
> a 4GB static memory is needed fo
to make fetching errors from broken repositories non-fatal.
Signed-off-by: Fabian Grünbichler
---
based on top of "extend/add commands" series from
20220921081242.1139249-1-f.gruenbich...@proxmox.com
src/bin/proxmox-offline-mirror.rs | 2 ++
src/bin/proxmox_offline_mirror_cmds/conf
the output can get quite long and warnings can easily be missed
otherwise.
Signed-off-by: Fabian Grünbichler
---
src/mirror.rs | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/src/mirror.rs b/src/mirror.rs
index e655847..f8afd2b 100644
--- a/src/mirror.rs
+++
On 22/09/2022 at 12:35, Dominik Csapak wrote:
>
> the only (minor) thing is that the wt patch could have handled the current
> situation
> (pve raw+value, pbs normalized+value), e.g. by using the field 'convert' or
> 'calculate'
> methods of extjs (we could have had a 'real_raw' and 'real_normal
On 21/07/2022 at 12:45, Matthias Heiserer wrote:
> This makes it consistent with the naming scheme in PBS/GUI.
> Keep value for API stability reasons and remove it in the next major version.
>
> Signed-off-by: Matthias Heiserer
> ---
> PVE/Diskmanage.pm | 2 ++
> ..
On 21/07/2022 at 12:45, Matthias Heiserer wrote:
> This makes it consistent with the naming scheme in PVE/GUI.
> Keep value for API stability reasons, and remove it in next major version.
>
> Signed-off-by: Matthias Heiserer
> ---
> src/tools/disks/smart.rs | 9 +++--
> 1 file changed, 7 in
applied both patches, thanks
On Fri, Sep 23, 2022 at 11:51:13AM +0200, Dominik Csapak wrote:
> includes the following improvements:
> * increases 'force cleanup' timeout to 60s (from 5)
> * saves individual timeout for each vm
> * don't force cleanup for vms where normal cleanup worked
> * sending
On 9/23/22 11:55, Stefan Hanreich wrote:
Signed-off-by: Stefan Hanreich
---
examples/guest-example-hookscript.pl | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/examples/guest-example-hookscript.pl
b/examples/guest-example-hookscript.pl
index adeed59..345b5d9 100755
--- a
Signed-off-by: Stefan Hanreich
---
src/PVE/API2/LXC.pm | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 589f96f..d6ebc08 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1609,6 +1609,8 @@ __PACKAGE__->register_method({
This patch adds pre/post-clone hooks for when the user clones a CT/VM
from the Web UI / CLI. I have tested this with both VMs/CTs via Web UI and CLI.
Are there any other places where the hook should get triggered that I missed?
Clone is a bit special since it can either target the same node o
Signed-off-by: Stefan Hanreich
---
PVE/API2/Qemu.pm | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 3ec31c2..23a7658 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3417,6 +3417,8 @@ __PACKAGE__->register_method({
my ($conffil
Signed-off-by: Stefan Hanreich
---
examples/guest-example-hookscript.pl | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/examples/guest-example-hookscript.pl
b/examples/guest-example-hookscript.pl
index adeed59..345b5d9 100755
--- a/examples/guest-example-hookscript.pl
+++ b/exa
currently, the 'forced_cleanup' (sending SIGKILL to the qemu process),
is intended to be triggered 5 seconds after sending the initial shutdown
signal (SIGTERM) which is sadly not enough for some setups.
It could also accidentally be triggered earlier than 5 seconds if a
SIGALRM triggers in the times
includes the following improvements:
* increases 'force cleanup' timeout to 60s (from 5)
* saves individual timeout for each vm
* don't force cleanup for vms where normal cleanup worked
* sending QMP quit instead of SIGTERM (less log noise)
changes from v3:
* merge CleanupData into Client (pidfd,t
this is functionally the same, but sending SIGTERM has the ugly side
effect of printing the following to the log:
> QEMU[]: kvm: terminating on signal 15 from pid (/usr/sbin/qmeventd)
while sending a QMP quit command does not.
Signed-off-by: Dominik Csapak
---
qmeventd/qmeventd.c | 14 +++
On 9/23/22 10:31, Wolfgang Bumiller wrote:
On Thu, Sep 22, 2022 at 04:19:34PM +0200, Dominik Csapak wrote:
instead of always sending a SIGKILL to the target pid.
It was not that much of a problem since the timeout previously was 5
seconds and we used pidfds where possible, thus the chance of killing the wrong process was rather slim.
Sorry, disregard; I was too fast with this version and did not see that
Wolfgang wrote something about 2/3 too.
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
On Thu, Sep 22, 2022 at 04:19:34PM +0200, Dominik Csapak wrote:
> instead of always sending a SIGKILL to the target pid.
> It was not that much of a problem since the timeout previously was 5
> seconds and we used pidfds where possible, thus the chance of killing the
> wrong process was rather slim.
includes the following improvements:
* increases 'force cleanup' timeout to 60s (from 5)
* saves individual timeout for each vm
* don't force cleanup for vms where normal cleanup worked
* sending QMP quit instead of SIGTERM (less log noise)
changes from v2:
* change from cast of the function to ca
currently, the 'forced_cleanup' (sending SIGKILL to the qemu process),
is intended to be triggered 5 seconds after sending the initial shutdown
signal (SIGTERM) which is sadly not enough for some setups.
It could also accidentally be triggered earlier than 5 seconds if a
SIGALRM triggers in the times
instead of always sending a SIGKILL to the target pid.
It was not that much of a problem since the timeout previously was 5
seconds and we used pidfds where possible, thus the chance of killing the
wrong process was rather slim.
Now we increased the timeout to 60s which makes the race a bit more li
this is functionally the same, but sending SIGTERM has the ugly side
effect of printing the following to the log:
> QEMU[]: kvm: terminating on signal 15 from pid (/usr/sbin/qmeventd)
while sending a QMP quit command does not.
Signed-off-by: Dominik Csapak
---
qmeventd/qmeventd.c | 14 +++
Only print it when there is a snapshot that would've been removed
without the safeguard. Mostly relevant when a new volume is added to
an already replicated guest.
Fixes replication tests in pve-manager.
Fixes: c0b2948 ("replication: prepare: safeguard against removal if expected
snapshot is mis
On Thu, Sep 22, 2022 at 04:19:33PM +0200, Dominik Csapak wrote:
> currently, the 'forced_cleanup' (sending SIGKILL to the qemu process),
> is intended to be triggered 5 seconds after sending the initial shutdown
> signal (SIGTERM) which is sadly not enough for some setups.
>
> Accidentally, it cou
On Thu, Sep 22, 2022 at 01:37:57PM +0200, Dominik Csapak wrote:
> On 9/22/22 12:14, Matthias Heiserer wrote:
> > On 21.09.2022 14:49, Dominik Csapak wrote:
> > > instead of always sending a SIGKILL to the target pid.
> > > It was not that much of a problem since the timeout previously was 5
> > > s