Hi Thomas,
We are still seeing this every once in a while. I can definitely tell
you that it is connected to older Linux guest kernels, but we have not
been able to identify a specific version, which would have made searching
for a fix commit easier.
We are going to upgrade all our host kernels to
We have found a VM that recovered from a freeze, and it seems like it has jumped
in time.
Below I have pasted a dump of tty1; it is OCR'd, so some characters may have
been misinterpreted.
hild
[13198552.767867] le-rss:010 Killed process 10374 (crop) total,r,4376400,
anon-rss,018, tl [13
t;" || $? != 0 ]]; then
ret=1
fi
exit $ret
--
* Fixing the bug
This patch introduces the use of safe_execveat instead of
safe_execve for the emulation of execve. By using the do_openat
function, we ensure that the executable file descriptor is really
the
, since the
former is now useless.
Signed-off-by: Olivier Dion
---
linux-user/syscall.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index e2af3c1494..68340bcb67 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
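Roughly, the approach described above amounts to exec'ing a pre-opened file
descriptor instead of passing the pathname straight to execve(), so that any
path translation (such as redirecting /proc/self/exe to the emulated binary)
can happen at open time. A stand-alone sketch of that idea, not the QEMU code;
the helper name and the translation step are assumptions:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch only: an emulator would map the guest's "/proc/self/exe" to the
 * path of the emulated binary here before opening it. */
static int exec_via_fd(const char *path, char *const argv[], char *const envp[])
{
    int fd = open(path, O_RDONLY | O_CLOEXEC);
    if (fd < 0) {
        perror("open");
        return -1;
    }
    /* execveat(fd, "", ..., AT_EMPTY_PATH) executes the file referred to by
     * fd itself; glibc may not provide a wrapper, so use the raw syscall. */
    syscall(SYS_execveat, fd, "", argv, envp, AT_EMPTY_PATH);
    perror("execveat");   /* only reached if the exec failed */
    close(fd);
    return -1;
}

int main(void)
{
    char *const args[] = { "/bin/true", NULL };
    char *const envp[] = { NULL };
    return exec_via_fd("/bin/true", args, envp) == 0 ? 0 : 1;
}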
Hi, this is an update after some extended tests and a fallback migration
to 4.14.
After doing another >10k migrations, we can say with confidence that we also
encounter this issue on kernel 4.14.
We migrate VPSes from servers in serial (one after the other) mode, and we
notice that on some servers we enc
On 2019-08-23T12:58:43-0400, Laurent Vivier wrote:
> On 07/08/2019 at 15:54, d...@linutronix.de wrote:
> > From: Olivier Dion
> >
> > If not handled, QEMU will execve itself instead of the emulated
> > process. This could result in a potential security risk.
> >
From: Olivier Dion
When the emulated process tries to execve itself through /proc/self/exe,
the QEMU user-mode binary will be executed instead of the process.
The following short program demonstrates that:
--
#include
#include
#include
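A minimal reproducer along those lines (a sketch, not the original program
from this mail; the header choice, argument handling, and messages are
assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc > 1) {
        /* We were re-executed through /proc/self/exe. */
        printf("re-executed as: %s\n", argv[0]);
        return 0;
    }
    /* Without the fix, running this under qemu-user makes the exec below
     * start QEMU itself instead of this program. */
    char *const args[] = { "/proc/self/exe", "child", NULL };
    execv("/proc/self/exe", args);
    perror("execv");   /* only reached if the exec failed */
    return EXIT_FAILURE;
}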
From: Olivier Dion
If not handled, QEMU will execve itself instead of the emulated
process. This could result in a potential security risk.
Signed-off-by: Olivier Dion
---
linux-user/syscall.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
't, which I would rather not do, but I think I understand why having the
exact same frequency is preferred. What are your thoughts on this matter?
Disabling kvm-clock is not really an option, as we don't want to restart
and log in to all of the running VMs.
Dion
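For what it's worth, which clocksource a running guest is currently using can
be read from sysfs without a reboot; a small illustrative sketch (the sysfs
path is standard, the rest is illustration only):

#include <stdio.h>

int main(void)
{
    /* Reports e.g. "kvm-clock" or "tsc" on a running guest. */
    const char *path =
        "/sys/devices/system/clocksource/clocksource0/current_clocksource";
    char buf[64] = "";

    FILE *f = fopen(path, "r");
    if (!f || !fgets(buf, sizeof(buf), f)) {
        perror(path);
        if (f) {
            fclose(f);
        }
        return 1;
    }
    fclose(f);
    printf("current clocksource: %s", buf);  /* buf keeps its trailing newline */
    return 0;
}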
nning, which CPU are you migrating from/to, and which kernel version are you
running from/to?
Could you dump the XML in this comment section from a defined guest that
was frozen?
Dion
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
It seems like the patch is applied to the guest's source kernel.
crash> clock_event_device
struct clock_event_device {
void (*event_handler)(struct clock_event_device *);
int (*set_next_event)(unsigned long, struct clock_event_device *);
int (*set_next_ktime)(ktime_t, struct clock_event_device *);
Is there a way that we can check that the clock indeed jumped? We can try to
read some kernel variables.
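One rough userspace cross-check from inside a guest that is still reachable
(a sketch of the idea only, not something that has been run on these guests;
the threshold is arbitrary) is to compare the wall clock with the time since
boot and flag implausible values:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec real, mono;

    if (clock_gettime(CLOCK_REALTIME, &real) != 0 ||
        clock_gettime(CLOCK_MONOTONIC, &mono) != 0) {
        perror("clock_gettime");
        return 1;
    }

    /* CLOCK_MONOTONIC roughly tracks time since boot (it ignores suspend). */
    double uptime_days = mono.tv_sec / 86400.0;
    printf("wall clock (seconds since epoch): %lld\n", (long long)real.tv_sec);
    printf("time since boot: %.1f days\n", uptime_days);

    /* A guest reporting tens of thousands of days of uptime, or a wall clock
     * centuries in the future, has almost certainly had its clock jump. */
    if (uptime_days > 10000.0) {
        printf("suspicious uptime; the clock probably jumped\n");
    }
    return 0;
}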
We just got another hung guest's crash dump working; this VM also shows a weird
uptime:
DATE: Fri Dec 23 09:06:16 2603
UPTIME: 106752 days, 00:10:35
T
Hi Alan,
Dmesg shows nothing special:
[29891577.708544] IPv6 addrconf: prefix with wrong length 48
[29891580.650637] IPv6 addrconf: prefix with wrong length 48
[29891582.013656] IPv6 addrconf: prefix with wrong length 48
[29891583.753246] IPv6 addrconf: prefix with wrong length 48
[29891585.39794
A virsh dumpxml of one of the guests:
vps12
-953c-d629-1276-0616
4194304
4194304
2
/machine
hvm
Westmere
destroy
restart
restart
/usr/bin/kvm
https://bugs.launchpad.net/qemu/+bug/1831225
https://bugs.launchpad.net/bugs/177
Title:
guest migration 100% cpu freeze bug
Status in QEMU:
Fix Released
Bug description:
# I
Public bug reported:
# Investigate migration CPU hog (100%) bug
I have some issues when migrating from kernel 4.14.63 running qemu 2.11.2 to
kernel 4.19.43 running qemu 2.11.2.
The hypervisors are running on Debian jessie with libvirt v5.3.0.
Linux, libvirt and QEMU are all custom-compiled.
I mi
I would like to add that I only saw this when the source we migrate from is
running on a relatively new CPU: Intel(R) Xeon(R) Gold 6126 CPU @
2.60GHz.
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz
stepping : 4
Guest XML definition:
vps25
0cf4666d-6855-b3a8-12da-2967563f
8388608
8388608
4
/machine
hvm
Westmere
destroy
restart
restart
/usr/bin/kvm
734003200
57
Public bug reported:
# Investigate migration CPU hog (100%) bug
I have some issues when migrating from qemu 2.6.2 to qemu 2.11.1.
The hypervisors are running kernel 4.9.92 on Debian stretch with libvirt v4.0.0.
Linux, libvirt and QEMU are all custom-compiled.
I migrated around 21,000 VMs from qem
Yeah, I have a use case: before the last sync on a storage migration we
suspend the VM -> send the last diffs -> mount the new storage server, and
after that we change a symlink -> call reopen -> check that all file
descriptors have changed before resuming the VM.
Dion
> On 22 Mar 2018
so it checks use_lock from BDRVRawState and
tries to reopen lock_fd accordingly
- change raw_reopen_commit so it closes the old lock_fd on use_lock
Signed-off-by: Dion Bosschieter
---
block/file-posix.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/block/file-posix.c b/block/file-posix.c
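The general prepare/commit pattern the description above follows, as a sketch
with made-up types and names rather than the actual QEMU block-layer code:
open the new lock fd during prepare, so a failure leaves the old state
untouched, and only close the old fd in commit.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-ins for the real structures; BDRVRawState and the QEMU reopen
 * machinery are not reproduced here. */
struct raw_state {
    int lock_fd;     /* fd currently used for file locking */
    int use_lock;    /* whether locking is enabled */
};

struct reopen_state {
    struct raw_state *s;
    int new_lock_fd; /* opened in prepare, promoted in commit */
};

/* Prepare: open an fd on the new file, but do not disturb the old one,
 * so the reopen can still be rolled back on failure. */
static int reopen_prepare(struct reopen_state *rs, const char *filename)
{
    rs->new_lock_fd = -1;
    if (!rs->s->use_lock) {
        return 0;
    }
    rs->new_lock_fd = open(filename, O_RDONLY | O_CLOEXEC);
    return rs->new_lock_fd < 0 ? -1 : 0;
}

/* Commit: the reopen succeeded, so switch to the new fd and close the old
 * one, which still points at the pre-migration storage. */
static void reopen_commit(struct reopen_state *rs)
{
    if (rs->s->use_lock && rs->new_lock_fd >= 0) {
        close(rs->s->lock_fd);
        rs->s->lock_fd = rs->new_lock_fd;
        rs->new_lock_fd = -1;
    }
}

int main(void)
{
    struct raw_state s = { .lock_fd = open("/dev/null", O_RDONLY), .use_lock = 1 };
    struct reopen_state rs = { .s = &s, .new_lock_fd = -1 };

    if (reopen_prepare(&rs, "/dev/null") == 0) {
        reopen_commit(&rs);
        puts("reopen committed");
    }
    close(s.lock_fd);
    return 0;
}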