Re: [Qemu-devel] [PATCH] file-posix: add drop-cache=on|off option

2019-02-26, Neil Skrypuch

On Tuesday, February 26, 2019 10:35:49 AM EST Stefan Hajnoczi wrote:
> Suggested-by: Neil Skrypuch
> Signed-off-by: Stefan Hajnoczi
> ---
>  qapi/block-core.json | 5 +
>  block/file-posix.c   | 14 ++
>  2 files changed, 19 insertions(+)

Tested-by: Neil Skrypuch
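The new option is also exposed through QMP's blockdev-add for the file driver. Below is a minimal sketch of exercising it from Python; the socket path, node name, and image path are illustrative assumptions, not taken from the patch:

import json
import socket

def qmp(sock, cmd, args=None):
    # Sketch-only QMP helper: sends one command and reads one reply,
    # ignoring the asynchronous events a real client must handle.
    sock.sendall(json.dumps({"execute": cmd, "arguments": args or {}}).encode())
    return json.loads(sock.recv(65536).decode())

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/tmp/qmp.sock")   # assumed QMP socket path
s.recv(65536)                # consume the QMP greeting
qmp(s, "qmp_capabilities")

# Attach a raw file node with page-cache dropping disabled, the QMP
# equivalent of drop-cache=off on the command line.
print(qmp(s, "blockdev-add", {
    "driver": "file",
    "node-name": "file0",
    "filename": "/var/lib/images/test.img",
    "drop-cache": False,
}))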

Re: [Qemu-devel] [regression] Clock jump on VM migration

2019-02-08, Neil Skrypuch

On Friday, February 8, 2019 4:48:19 AM EST Dr. David Alan Gilbert wrote:
> * Stefan Hajnoczi (stefa...@redhat.com) wrote:
> > On Thu, Feb 07, 2019 at 05:33:25PM -0500, Neil Skrypuch wrote:
> >
> > Thanks for your email!
> >
> > Please post your QEMU command-line [...]

[Qemu-devel] [regression] Clock jump on VM migration

2019-02-07, Neil Skrypuch

We (ab)use migration + block mirroring to perform transparent zero downtime VM backups. Basically:

1) do a block mirror of the source VM's disk
2) migrate the source VM to a destination VM using the disk copy
3) cancel the block mirroring
4) resume the source VM
5) shut down the destination VM
[...]
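Roughly, steps 1-4 of that flow map onto QMP commands. Here is a minimal Python sketch; the socket path, device name, target path, and migration URI are assumptions, and real code would consume QMP events rather than polling:

import json, socket, time

def qmp(sock, cmd, args=None):
    # Sketch-only QMP helper; ignores asynchronous events interleaved
    # with replies, which a production client must handle.
    sock.sendall(json.dumps({"execute": cmd, "arguments": args or {}}).encode())
    return json.loads(sock.recv(65536).decode())

src = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
src.connect("/tmp/src-qmp.sock")  # assumed QMP socket of the source VM
src.recv(65536)                   # consume the QMP greeting
qmp(src, "qmp_capabilities")

# 1) mirror the source VM's disk to the copy the destination will use
qmp(src, "drive-mirror", {"device": "drive0",
                          "target": "/backup/disk-copy.img",
                          "sync": "full", "mode": "absolute-paths"})

# wait for the mirror to reach the ready (synchronised) state
while not any(j.get("ready") for j in qmp(src, "query-block-jobs")["return"]):
    time.sleep(1)

# 2) migrate to the destination VM started against the disk copy
qmp(src, "migrate", {"uri": "tcp:127.0.0.1:4444"})

# 3) cancel the mirror, then 4) resume the source VM
qmp(src, "block-job-cancel", {"device": "drive0"})
qmp(src, "cont")
# 5) the destination VM is shut down gracefully, leaving the backup image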

[Qemu-devel] [Bug 1732959] Re: [regression] stop/cont triggers clock jump proportional to host clock drift

2018-12-07, Neil Skrypuch

This appears to be fixed in the kernel as of 0bc48bea36d178aea9d7f83f66a1b397cec9db5c (merged for 4.13, backported to RHEL 7.6).

** Changed in: qemu
   Status: New => Fix Released

[Qemu-devel] [Bug 1732959] Re: [regression] Clock jump on source VM after migration

2018-06-21, Neil Skrypuch

Actually, migration isn't required to reproduce this issue at all; it is the stop/cont involved in the migration that triggers the bug. It is significantly easier to reproduce with the following steps:

1) on host, adjtimex -f 1000
2) start guest
3) wait 20 minutes
4) stop and cont the guest
[...]
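A scripted version of these steps might look like the following sketch; the QMP socket path is an assumption, the wait is kept at 20 minutes as in the steps above, and adjtimex must run as root:

import json, socket, subprocess, time

def qmp(sock, cmd, args=None):
    # Sketch-only QMP helper: one command in, one reply out.
    sock.sendall(json.dumps({"execute": cmd, "arguments": args or {}}).encode())
    return json.loads(sock.recv(65536).decode())

# 1) skew the host clock frequency, as in the steps above
subprocess.run(["adjtimex", "-f", "1000"], check=True)

# 2) guest assumed already started with -qmp unix:/tmp/qmp.sock,server
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/tmp/qmp.sock")
s.recv(65536)                # consume the QMP greeting
qmp(s, "qmp_capabilities")

# 3) let host clock drift accumulate
time.sleep(20 * 60)

# 4) stop and cont the guest; the guest clock should then jump by an
#    amount proportional to the accumulated host clock drift
qmp(s, "stop")
qmp(s, "cont")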

[Qemu-devel] [Bug 1732959] Re: [regression] Clock jump on source VM after migration

2018-02-21, Neil Skrypuch

As a further test, I disabled ntpd on the host and ran ntpdate via cron every 12 hours, so that the clock would be relatively accurate but no clock skew would be involved. This also reproduced the failure as initially described. This is interesting, as it means that a much simpler and faster repro[...]

[Qemu-devel] [Bug 1732959] Re: [regression] Clock jump on source VM after migration

2018-02-09, Neil Skrypuch

Two important findings:

1) If I disable ntpd on the host, this issue goes away.
2) If I forcefully induce substantial clock skew on the host (with adjtimex -f 1000), it becomes much less time intensive to reproduce this issue.

Using the attached reproducer, but replacing the 18h sleep with [...]

[Qemu-devel] [Bug 1732959] [NEW] [regression] Clock jump on source VM after migration

2017-11-17, Neil Skrypuch

Public bug reported:

We (ab)use migration + block mirroring to perform transparent zero downtime VM backups. Basically:

1) do a block mirror of the source VM's disk
2) migrate the source VM to a destination VM using the disk copy
3) cancel the block mirroring
4) resume the source VM
5) shut down the destination VM
[...]

Re: [Qemu-devel] Mysterious RST connection with virtio-net NATting VM

2016-11-02, Neil Skrypuch

On October 21, 2016 04:34:38 PM Neil Skrypuch wrote:
> I have a NATting VM (let's call this vm/nat) sitting in front of another VM
> (let's call this one vm/wget), with vm/wget residing in a private virtual
> network, with all network connectivity for vm/wget going through vm/nat [...]

[Qemu-devel] Mysterious RST connection with virtio-net NATting VM

2016-10-21, Neil Skrypuch

I have a NATting VM (let's call this vm/nat) sitting in front of another VM (let's call this one vm/wget), with vm/wget residing in a private virtual network, with all network connectivity for vm/wget going through vm/nat. Additionally, I have a web server running on a physical machine from which [...]
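One way to narrow down which side emits the spurious RST is to capture on the tap devices on either side of vm/nat and compare. A small sketch with scapy follows; the interface name is an assumption, and plain tcpdump with the same filter works just as well:

from scapy.all import sniff

# The BPF filter matches only TCP segments with the RST flag set.
sniff(iface="tap0",                       # assumed tap device backing vm/nat
      filter="tcp[tcpflags] & tcp-rst != 0",
      prn=lambda pkt: pkt.summary(),      # scapy prints the returned summary
      store=False)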

Re: [Qemu-devel] [PATCH] tap: avoid deadlocking rx

2014-03-10, Neil Skrypuch

> [...] with the expectation that the peer will call
> qemu_net_queue_flush(). But hw/net/virtio-net.c does not monitor
> vm_running transitions and issue the flush. Hence we're left with a
> broken tap device.
>
> Cc: qemu-sta...@nongnu.org
> Reported-by: Neil Skrypuch
> Signed-off-by: [...]

Re: [Qemu-devel] Live migration results in non-working virtio-net device (sometimes)

2014-03-05, Neil Skrypuch

On Wednesday 05 March 2014 16:59:24 Andreas Färber wrote:
> On 30.01.2014 19:23, Neil Skrypuch wrote:
> > First, let me briefly outline the way we use live migration, as it is
> > probably not typical. We use live migration (with block migration) to
> > make backups of VMs [...]

Re: [Qemu-devel] Live migration results in non-working virtio-net device (sometimes)

2014-03-03, Neil Skrypuch

On Saturday 01 March 2014 10:34:03 陈梁 wrote:
> > On Thursday 30 January 2014 13:23:04 Neil Skrypuch wrote:
> >> First, let me briefly outline the way we use live migration, as it is
> >> probably not typical. We use live migration (with block migration) to
> >> make [...]

Re: [Qemu-devel] Live migration results in non-working virtio-net device (sometimes)

2014-02-28, Neil Skrypuch

On Thursday 30 January 2014 13:23:04 Neil Skrypuch wrote:
> First, let me briefly outline the way we use live migration, as it is
> probably not typical. We use live migration (with block migration) to make
> backups of VMs with zero downtime. The basic process goes like this:
> [...]

[Qemu-devel] Live migration results in non-working virtio-net device (sometimes)

2014-01-30, Neil Skrypuch

First, let me briefly outline the way we use live migration, as it is probably not typical. We use live migration (with block migration) to make backups of VMs with zero downtime. The basic process goes like this:

1) migrate src VM -> dest VM
2) migration completes
3) cont src VM
4) gracefully shut down the dest VM
[...]
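For reference, a minimal sketch of steps 1-3 over QMP, as they might have looked with the QEMU versions of that era; the socket path and migration URI are assumptions, and the "blk" flag, which requests full block migration alongside RAM, has since been deprecated in favour of drive-mirror based flows:

import json, socket, time

def qmp(sock, cmd, args=None):
    # Sketch-only QMP helper: one command in, one reply out.
    sock.sendall(json.dumps({"execute": cmd, "arguments": args or {}}).encode())
    return json.loads(sock.recv(65536).decode())

src = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
src.connect("/tmp/src-qmp.sock")  # assumed QMP socket of the src VM
src.recv(65536)                   # consume the QMP greeting
qmp(src, "qmp_capabilities")

# 1) migrate src VM -> dest VM, copying the disks too via block migration
qmp(src, "migrate", {"uri": "tcp:127.0.0.1:4444", "blk": True})

# 2) poll until the migration has completed
while qmp(src, "query-migrate")["return"].get("status") != "completed":
    time.sleep(1)

# 3) resume the source VM; the paused dest VM now holds a consistent
#    disk image and can be gracefully shut down (step 4)
qmp(src, "cont")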