instead of always sending a SIGKILL to the target pid.
It was not much of a problem since the timeout previously was 5
seconds and we used pidfds where possible, so the chance of killing the
wrong process was rather slim.
Now we increased the timeout to 60s, which makes the race a bit more likely.
Currently, the 'forced_cleanup' (sending SIGKILL to the QEMU process)
is intended to be triggered 5 seconds after sending the initial shutdown
signal (SIGTERM), which is sadly not enough for some setups.
It could also accidentally be triggered earlier than 5 seconds, if a
SIGALRM triggers in the times
includes the following improvements:
* increase the 'force cleanup' timeout to 60s (from 5)
* save an individual timeout for each VM
* don't force cleanup for VMs where the normal cleanup worked
* send a QMP 'quit' instead of SIGTERM (less log noise)
I opted for variant 1 of wbumiller's suggestions, as it yi
This is functionally the same, but sending SIGTERM has the ugly side
effect of printing the following to the log:
> QEMU[]: kvm: terminating on signal 15 from pid (/usr/sbin/qmeventd)
while sending a QMP quit command does not.
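The QMP variant can be sketched like this (a minimal sketch only, assuming a connected QMP socket fd on which the capabilities handshake has already happened, as it has on qmeventd's existing monitor connection; the helper name is made up):

```c
#include <unistd.h>

/* the command QEMU's QMP monitor understands for a clean exit */
static const char qmp_quit_cmd[] = "{\"execute\":\"quit\"}\n";

/* write the quit command to an already-negotiated QMP socket;
 * returns 0 on success, -1 on a short or failed write */
static int send_qmp_quit(int qmp_fd)
{
    ssize_t want = (ssize_t)(sizeof(qmp_quit_cmd) - 1);
    return write(qmp_fd, qmp_quit_cmd, want) == want ? 0 : -1;
}
```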
Signed-off-by: Dominik Csapak
---
qmeventd/qmeventd.c | 14 +++
This patch adds pre/post-migrate hooks that run when the user migrates a CT/VM
from the Web UI / CLI. I have tested this with both VMs/CTs via Web UI and CLI.
Are there any other places where the hook should get triggered that I missed?
I have decided to create distinct event types for source/target
Signed-off-by: Stefan Hanreich
---
src/PVE/AbstractMigrate.pm | 14 ++
1 file changed, 14 insertions(+)
diff --git a/src/PVE/AbstractMigrate.pm b/src/PVE/AbstractMigrate.pm
index d90e5b7..5e03488 100644
--- a/src/PVE/AbstractMigrate.pm
+++ b/src/PVE/AbstractMigrate.pm
@@ -178,6 +178,
Signed-off-by: Stefan Hanreich
---
PVE/CLI/qm.pm | 50 +++
PVE/QemuMigrate.pm | 23
test/MigrationTest/QemuMigrateMock.pm | 4 +++
3 files changed, 77 insertions(+)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 6
Signed-off-by: Stefan Hanreich
---
examples/guest-example-hookscript.pl | 28
1 file changed, 28 insertions(+)
diff --git a/examples/guest-example-hookscript.pl
b/examples/guest-example-hookscript.pl
index adeed59..e4adf6d 100755
--- a/examples/guest-example-hookscript.pl
Signed-off-by: Stefan Hanreich
---
src/PVE/CLI/pct.pm | 51 ++
src/PVE/LXC/Migrate.pm | 23 +++
2 files changed, 74 insertions(+)
diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 23793ee..e80586d 100755
--- a/src/PVE/CLI/pct.pm
On Thu, Sep 22, 2022 at 03:19:43PM +0200, Stefan Hanreich wrote:
> Signed-off-by: Stefan Hanreich
> ---
>
> There might be a better way to differentiate the different errors from
> restorefn in the error handling logic, although I think in this case it is
> still fine. This might get a bit messy
On Thu, Sep 22, 2022 at 03:19:42PM +0200, Stefan Hanreich wrote:
> Signed-off-by: Stefan Hanreich
> ---
> src/PVE/API2/LXC.pm | 6 ++
> 1 file changed, 6 insertions(+)
>
> diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
> index 589f96f..3ecf5e5 100644
> --- a/src/PVE/API2/LXC.pm
> +++
Signed-off-by: Stefan Hanreich
---
There might be a better way to differentiate the different errors from
restorefn in the error handling logic, although I think in this case it is
still fine. This might get a bit messy though if in the future someone adds
another source for errors. Maybe add a s
Signed-off-by: Stefan Hanreich
---
src/PVE/API2/LXC.pm | 6 ++
1 file changed, 6 insertions(+)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 589f96f..3ecf5e5 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -376,6 +376,8 @@ __PACKAGE__->register_method({
Signed-off-by: Stefan Hanreich
---
Similar to the snapshot patch series, I have refrained from continuing the
numbering. This should be harmonized in a follow-up patch.
examples/guest-example-hookscript.pl | 14 ++
1 file changed, 14 insertions(+)
diff --git a/examples/guest-exampl
This patch adds hooks that run when the user restores a backup from the Web UI
/ CLI. I have tested this with both VMs/CTs via Web UI and CLI. Are there any
other places where the hook should get triggered that I missed?
pve-container:
Stefan Hanreich (1):
Add pre/post-restore hooks to CTs
sr
On Thu, Sep 22, 2022 at 02:22:45PM +0200, Dominik Csapak wrote:
> On 9/22/22 14:01, Wolfgang Bumiller wrote:
> > On Thu, Sep 22, 2022 at 01:31:49PM +0200, Dominik Csapak wrote:
> > > [snip]
> > > > > -/*
> > > > > - * SIGALRM and cleanup handling
> > > > > - *
> > > > > - * terminate_client will se
On 9/22/22 14:01, Wolfgang Bumiller wrote:
On Thu, Sep 22, 2022 at 01:31:49PM +0200, Dominik Csapak wrote:
[snip]
-/*
- * SIGALRM and cleanup handling
- *
- * terminate_client will set an alarm for 5 seconds and add its client's PID to
- * the forced_cleanups list - when the timer expires, we i
On Thu, Sep 22, 2022 at 01:31:49PM +0200, Dominik Csapak wrote:
> [snip]
> > > -/*
> > > - * SIGALRM and cleanup handling
> > > - *
> > > - * terminate_client will set an alarm for 5 seconds and add its client's
> > > PID to
> > > - * the forced_cleanups list - when the timer expires, we iterate t
Signed-off-by: Stefan Hanreich
---
The example script currently enumerates the different phases (first, second,
...). I have opted to not continue this enumeration as I couldn't see any
particular reason for this and I will add lots of new phases in subsequent patch
series. Am I missing a particu
This patch adds hooks that run when the user creates a snapshot from the Web UI
/ CLI. I have tested this with both VMs/CTs via Web UI and CLI. Are there any
other places where the hook should get triggered that I missed?
pve-guest-common:
Stefan Hanreich (1):
add pre/post-snapshot hooks
src/
Signed-off-by: Stefan Hanreich
---
I have opted to include the snapshot hooks in the abstract class, since this
seemed like the more straightforward way.
The other option would have been duplicating the calls of the hooks into the
respective Backends, but I couldn't see any upsides to this.
This
Am 21/09/2022 um 14:49 schrieb Dominik Csapak:
> +fprintf(stderr, " -t timeout kill timeout (default: %ds)\n",
> DEFAULT_KILL_TIMEOUT);
just a nit, as you probably send a v2 for Fabian's comments anyway:
Please do s/timeout// i.e.:
-t kill timeo..
That has two advantages, first the use
On 9/22/22 12:14, Matthias Heiserer wrote:
On 21.09.2022 14:49, Dominik Csapak wrote:
instead of always sending a SIGKILL to the target pid.
It was not that much of a problem since the timeout previously was 5
seconds and we used pidfds where possible, thus the chance of killing the
wrong process
[snip]
-/*
- * SIGALRM and cleanup handling
- *
- * terminate_client will set an alarm for 5 seconds and add its client's PID to
- * the forced_cleanups list - when the timer expires, we iterate the list and
- * attempt to issue SIGKILL to all processes which haven't yet stopped.
- */
-
-static v
Am 22.09.22 um 13:00 schrieb Fiona Ebner:
> Namely, when there are no snapshots at all on the volume, which also
> is the case when the volume doesn't exist. This happens when a fresh
> volume is added to an already replicated guest.
The warning still triggers when some other snapshot is added to
Namely, when there are no snapshots at all on the volume, which also
is the case when the volume doesn't exist. This happens when a fresh
volume is added to an already replicated guest.
Fixes replication tests in pve-manager, which didn't like the additional
output.
Fixes: c0b2948 ("replication:
All 3 patches LGTM.
The only (minor) thing is that the wt patch could have handled the current
situation
(PVE raw+value, PBS normalized+value), e.g. by using the field 'convert' or
'calculate'
methods of ExtJS (we could have had a 'real_raw' and 'real_normalized' that uses
the values depending on
On 21.09.2022 14:49, Dominik Csapak wrote:
instead of always sending a SIGKILL to the target pid.
It was not that much of a problem since the timeout previously was 5
seconds and we used pidfds where possible, thus the chance of killing the
wrong process was rather slim.
Now we increased the time
Example from the community forum:
https://forum.proxmox.com/threads/112225/
Signed-off-by: Fiona Ebner
---
qm-cloud-init.adoc | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/qm-cloud-init.adoc b/qm-cloud-init.adoc
index 8fb69a0..8363f10 100644
--- a/qm-cloud-init.
From time to time, users install the cloud-init package on the host
by accident [0]. Since the other commands are intended to be run on
the host, add a warning about the one command that shouldn't.
Patch suggested by @Neobin on the community forum [1].
[0] https://forum.proxmox.com/threads/11553
On September 21, 2022 2:49 pm, Dominik Csapak wrote:
> currently, the 'forced_cleanup' (sending SIGKILL to the qemu process),
> is intended to be triggered 5 seconds after sending the initial shutdown
> signal (SIGTERM) which is sadly not enough for some setups.
>
> Accidentally, it could be trigg
Am 19.09.22 um 13:00 schrieb Thomas Lamprecht:
> Bit torn on this one, otoh it's small and might help some, otoh I think that
> it shouldn't be required - there's OS in the tab title already and a disk tab
> two steps ahead, that paired with the fact that there's clearly written
> "CD/DVD
> drive