On Tuesday, February 26, 2019 10:35:49 AM EST Stefan Hajnoczi wrote:
> Suggested-by: Neil Skrypuch
> Signed-off-by: Stefan Hajnoczi
> ---
> qapi/block-core.json | 5 +
> block/file-posix.c | 14 ++
> 2 files changed, 19 insertions(+)
Tested-by: Neil Skrypuch
On Friday, February 8, 2019 4:48:19 AM EST Dr. David Alan Gilbert wrote:
> * Stefan Hajnoczi (stefa...@redhat.com) wrote:
> > On Thu, Feb 07, 2019 at 05:33:25PM -0500, Neil Skrypuch wrote:
> >
> > Thanks for your email!
> >
> > Please post your QEMU command-line
We (ab)use migration + block mirroring to perform transparent zero downtime VM
backups. Basically (a rough QMP sketch follows the list):
1) do a block mirror of the source VM's disk
2) migrate the source VM to a destination VM using the disk copy
3) cancel the block mirroring
4) resume the source VM
5) shut down the destination VM gracefully
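Concretely, here is a minimal sketch (in Python, over a raw QMP unix socket)
of how these five steps can be driven. The socket path, drive name
("drive-virtio-disk0"), target image path and migration URI are all
placeholders rather than values from the report, and the readiness/completion
polling is just one reasonable way to sequence the steps:

#!/usr/bin/env python3
import json
import socket
import time

def qmp_command(f, cmd, **args):
    """Send one QMP command and return its 'return' payload, skipping events."""
    req = {"execute": cmd}
    if args:
        req["arguments"] = args
    f.write(json.dumps(req) + "\r\n")
    f.flush()
    while True:
        msg = json.loads(f.readline())
        if "event" in msg:              # asynchronous event, not our reply
            continue
        if "error" in msg:
            raise RuntimeError(msg["error"])
        return msg.get("return")

sock = socket.socket(socket.AF_UNIX)
sock.connect("/var/run/qemu/src-vm.qmp")    # assumed QMP socket path
f = sock.makefile("rw")
json.loads(f.readline())                    # QMP greeting
qmp_command(f, "qmp_capabilities")

# 1) do a block mirror of the source VM's disk
qmp_command(f, "drive-mirror", device="drive-virtio-disk0",
            target="/backup/vm-copy.qcow2", format="qcow2",
            sync="full", mode="absolute-paths")
while not any(j.get("ready") for j in qmp_command(f, "query-block-jobs")):
    time.sleep(1)                           # wait for the mirror to converge

# 2) migrate the source VM to a destination VM using the disk copy
qmp_command(f, "migrate", uri="tcp:backup-host:4444")
while qmp_command(f, "query-migrate").get("status") != "completed":
    time.sleep(1)

# 3) cancel the block mirroring
qmp_command(f, "block-job-cancel", device="drive-virtio-disk0")

# 4) resume the source VM (migration leaves it paused)
qmp_command(f, "cont")

# 5) the destination VM is then shut down gracefully from its own monitor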
This appears to be fixed in the kernel as of
0bc48bea36d178aea9d7f83f66a1b397cec9db5c (merged for 4.13, backported to
RHEL 7.6).
** Changed in: qemu
Status: New => Fix Released
Actually, migration isn't required to reproduce this issue at all; it is
the stop/cont involved in the migration that triggers the bug. It is
significantly easier to reproduce the bug with the following steps
(sketched below):
1) on host, adjtimex -f 1000
2) start guest
3) wait 20 minutes
4) stop and
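A script form of this reproduction, assuming the guest exposes a QMP socket
(the path is a placeholder), that adjtimex is run as root on the host, and
that the truncated final step is a plain stop followed by cont (the same
transition a migration performs internally):

#!/usr/bin/env python3
import json
import socket
import subprocess
import time

def qmp(f, cmd, **args):
    """Issue one QMP command and return the first non-event reply."""
    req = {"execute": cmd}
    if args:
        req["arguments"] = args
    f.write(json.dumps(req) + "\r\n")
    f.flush()
    while True:
        msg = json.loads(f.readline())
        if "event" not in msg:
            return msg

# 1) on host, adjtimex -f 1000 (induce a frequency offset)
subprocess.run(["adjtimex", "-f", "1000"], check=True)

# 2) start guest -- assumed to be launched separately with something like
#    -qmp unix:/tmp/repro-vm.qmp,server,nowait; connect to that monitor
sock = socket.socket(socket.AF_UNIX)
sock.connect("/tmp/repro-vm.qmp")
f = sock.makefile("rw")
json.loads(f.readline())            # QMP greeting
qmp(f, "qmp_capabilities")

# 3) wait 20 minutes
time.sleep(20 * 60)

# 4) stop and (assumed) cont the guest
qmp(f, "stop")
time.sleep(5)
qmp(f, "cont")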
As a further test, I disabled ntpd on the host and ran ntpdate via cron
every 12 hours, so that the clock would be relatively accurate, but no
clock skew would be involved. This also reproduced the failure as
initially described.
This is interesting as it means that a much simpler and faster
repro
Two important findings:
1) If I disable ntpd on the host, this issue goes away.
2) If I forcefully induce substantial clock skew on the host (with adjtimex -f
1000), it becomes much less time intensive to reproduce this issue.
Using the attached reproducer but replacing the 18h sleep wit
Public bug reported:
We (ab)use migration + block mirroring to perform transparent zero
downtime VM backups. Basically:
1) do a block mirror of the source VM's disk
2) migrate the source VM to a destination VM using the disk copy
3) cancel the block mirroring
4) resume the source VM
5) shut down the destination VM gracefully
On October 21, 2016 04:34:38 PM Neil Skrypuch wrote:
> I have a NATting VM (let's call this vm/nat) sitting in front of another VM
> (let's call this one vm/wget), with vm/wget residing in a private virtual
> network, with all network connectivity for vm/wget going through vm/nat.
I have a NATting VM (let's call this vm/nat) sitting in front of another VM
(let's call this one vm/wget), with vm/wget residing in a private virtual
network, with all network connectivity for vm/wget going through vm/nat.
Additionally, I have a web server running on a physical machine from whic
> ...with the expectation that the peer will call
> qemu_net_queue_flush(). But hw/net/virtio-net.c does not monitor
> vm_running transitions and issue the flush. Hence we're left with a
> broken tap device.
>
> Cc: qemu-sta...@nongnu.org
> Reported-by: Neil Skrypuch
> Signed-o
On Wednesday 05 March 2014 16:59:24 Andreas Färber wrote:
> Am 30.01.2014 19:23, schrieb Neil Skrypuch:
> > First, let me briefly outline the way we use live migration, as it is
> > probably not typical. We use live migration (with block migration) to
> > make backups of V
On Saturday 01 March 2014 10:34:03 陈梁 wrote:
> > On Thursday 30 January 2014 13:23:04 Neil Skrypuch wrote:
> >> First, let me briefly outline the way we use live migration, as it is
> >> probably not typical. We use live migration (with block migration) to
> >> m
On Thursday 30 January 2014 13:23:04 Neil Skrypuch wrote:
> First, let me briefly outline the way we use live migration, as it is
> probably not typical. We use live migration (with block migration) to make
> backups of VMs with zero downtime. The basic process goes like this:
>
>
First, let me briefly outline the way we use live migration, as it is probably
not typical. We use live migration (with block migration) to make backups of
VMs with zero downtime. The basic process goes like this (sketched below):
1) migrate src VM -> dest VM
2) migration completes
3) cont src VM
4) gracefully s
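For reference, a comparable sketch of this older workflow, which relied on
QEMU's built-in block migration rather than a separate mirror job. Socket
paths and the destination URI are placeholders; the "blk" migration argument
shown here has since been deprecated in favour of driving the disk copy with
a mirror job as in the later messages above:

#!/usr/bin/env python3
import json
import socket
import time

def send(f, cmd, **args):
    """Issue one QMP command and return the first non-event reply."""
    req = {"execute": cmd}
    if args:
        req["arguments"] = args
    f.write(json.dumps(req) + "\r\n")
    f.flush()
    while True:
        msg = json.loads(f.readline())
        if "event" not in msg:
            return msg

def qmp_session(path):
    """Open a QMP connection and negotiate capabilities."""
    sock = socket.socket(socket.AF_UNIX)
    sock.connect(path)
    f = sock.makefile("rw")
    json.loads(f.readline())                  # QMP greeting
    send(f, "qmp_capabilities")
    return f

src = qmp_session("/var/run/qemu/src-vm.qmp")     # assumed source monitor

# 1) migrate src VM -> dest VM, copying the disks via block migration
send(src, "migrate", uri="tcp:backup-host:4444", blk=True)

# 2) migration completes
while send(src, "query-migrate").get("return", {}).get("status") != "completed":
    time.sleep(1)

# 3) cont src VM (migration leaves the source paused)
send(src, "cont")

# 4) gracefully shut down the dest VM from its own monitor; its disk image
#    is the backup
dst = qmp_session("/var/run/qemu/dst-vm.qmp")     # assumed destination monitor
send(dst, "system_powerdown")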