On Fri, Oct 10, 2014 at 09:33:04AM +0200, Marcin Gibuła wrote:
> >Does anybody know why the APIC state loaded by the first call to
> >kvm_arch_get_registers() is wrong, in the first place? What exactly is
> >different in the APIC state in the second kvm_arch_get_registers() call,
> >and when/why does it change?
Does anybody know why the APIC state loaded by the first call to
kvm_arch_get_registers() is wrong, in the first place? What exactly is
different in the APIC state in the second kvm_arch_get_registers() call,
and when/why does it change?
If cpu_synchronize_state() does the wrong thing if it is ca
On Mon, Aug 04, 2014 at 06:30:09PM +0200, Marcin Gibuła wrote:
> On 2014-07-31 13:27, Marcin Gibuła wrote:
> >>>Can you dump *env before and after the call to kvm_arch_get_registers?
> >>
> >>Yes, but it seems they are equal - I used memcmp() to compare them. Is
> >>there any other side effect that cpu_synchronize_all_states() may have?
On Thu, Sep 4, 2014 at 10:54 PM, Marcelo Tosatti wrote:
> On Thu, Sep 04, 2014 at 03:54:01PM -0300, Marcelo Tosatti wrote:
>> On Thu, Sep 04, 2014 at 08:52:00PM +0400, Andrey Korolyov wrote:
>> > On Thu, Sep 4, 2014 at 8:38 PM, Marcelo Tosatti
>> > wrote:
> > > > On Sun, Aug 24, 2014 at 10:51:38PM +0400, Andrey Korolyov wrote:
On Thu, Sep 04, 2014 at 03:54:01PM -0300, Marcelo Tosatti wrote:
> On Thu, Sep 04, 2014 at 08:52:00PM +0400, Andrey Korolyov wrote:
> > On Thu, Sep 4, 2014 at 8:38 PM, Marcelo Tosatti wrote:
> > > On Sun, Aug 24, 2014 at 10:51:38PM +0400, Andrey Korolyov wrote:
> > >> On Sun, Aug 24, 2014 at 8:57 PM, Andrey Korolyov wrote:
On Thu, Sep 04, 2014 at 08:52:00PM +0400, Andrey Korolyov wrote:
> On Thu, Sep 4, 2014 at 8:38 PM, Marcelo Tosatti wrote:
> > On Sun, Aug 24, 2014 at 10:51:38PM +0400, Andrey Korolyov wrote:
> >> On Sun, Aug 24, 2014 at 8:57 PM, Andrey Korolyov wrote:
> >> > On Sun, Aug 24, 2014 at 8:35 PM, Paolo Bonzini wrote:
On Thu, Sep 4, 2014 at 8:38 PM, Marcelo Tosatti wrote:
> On Sun, Aug 24, 2014 at 10:51:38PM +0400, Andrey Korolyov wrote:
>> On Sun, Aug 24, 2014 at 8:57 PM, Andrey Korolyov wrote:
>> > On Sun, Aug 24, 2014 at 8:35 PM, Paolo Bonzini wrote:
>> >> On 24/08/2014 18:19, Andrey Korolyov wrote:
>
On Sun, Aug 24, 2014 at 10:51:38PM +0400, Andrey Korolyov wrote:
> On Sun, Aug 24, 2014 at 8:57 PM, Andrey Korolyov wrote:
> > On Sun, Aug 24, 2014 at 8:35 PM, Paolo Bonzini wrote:
> >> On 24/08/2014 18:19, Andrey Korolyov wrote:
> >>> Sorry, I was a bit inaccurate in my thoughts on Friday about the
On Mon, Aug 25, 2014 at 2:45 PM, Paolo Bonzini wrote:
> On 24/08/2014 22:14, Andrey Korolyov wrote:
>> Forgot to mention, the _actual_ patch from above. Adding
>> cpu_synchronize_all_states() brings the old bug with lost interrupts
>> back.
>
> Are you adding it before or after cpu_clean_all_dirty?
On 24/08/2014 22:14, Andrey Korolyov wrote:
> Forgot to mention, the _actual_ patch from above. Adding
> cpu_synchronize_all_states() brings the old bug with lost interrupts
> back.
Are you adding it before or after cpu_clean_all_dirty?
Paolo
Forgot to mention, the _actual_ patch from above. Adding
cpu_synchronize_all_states() brings the old bug with lost interrupts
back.
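For readers following the exchange: the ordering being asked about is whether the state read for kvmclock happens before or after the dirty flags are cleared. A minimal sketch of the stop path under discussion, assuming a kvmclock-style vm_state_change handler and the cpu_clean_all_dirty() helper mentioned later in the thread; names and structure here are illustrative, not the actual patch:

/* Illustrative stop path of a kvmclock-style vm_state_change handler. */
static void kvmclock_stop_sketch(void)
{
    /* Read current register state (including the TSC) out of KVM so the
     * clock can be computed from up-to-date values. */
    cpu_synchronize_all_states();

    /* The side effect being debated: the call above also marks every
     * vCPU dirty, so stale APIC state cached in QEMU may be written back
     * into the kernel on the next entry and lose pending interrupts.
     * Clearing the dirty flags immediately after the read avoids that
     * write-back. */
    cpu_clean_all_dirty();
}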
On Sun, Aug 24, 2014 at 8:57 PM, Andrey Korolyov wrote:
> On Sun, Aug 24, 2014 at 8:35 PM, Paolo Bonzini wrote:
>> On 24/08/2014 18:19, Andrey Korolyov wrote:
>>> Sorry, I was a bit inaccurate in my thoughts on Friday about the necessary
>>> amount of work; the patch applies cleanly on 3.10 with a bit of
On Sun, Aug 24, 2014 at 8:35 PM, Paolo Bonzini wrote:
> On 24/08/2014 18:19, Andrey Korolyov wrote:
>> Sorry, I was a bit inaccurate in my thoughts on Friday about the necessary
>> amount of work; the patch applies cleanly on 3.10 with a bit of monkey
>> rewriting. The attached one fixed the problem for me - it
On 24/08/2014 18:19, Andrey Korolyov wrote:
> Sorry, I was a bit inaccurate in my thoughts on Friday about the necessary
> amount of work; the patch applies cleanly on 3.10 with a bit of monkey
> rewriting. The attached one fixed the problem for me - it represents
> 0b10a1c87a2b0fb459baaefba9cb163dbb8d3344,
> 0bc830b05c667218d703f2026ec866c49df974fc,
Sorry, I was a bit inaccurate in my thoughts on Friday about the necessary
amount of work; the patch applies cleanly on 3.10 with a bit of monkey
rewriting. The attached one fixed the problem for me - it represents
0b10a1c87a2b0fb459baaefba9cb163dbb8d3344,
0bc830b05c667218d703f2026ec866c49df974fc,
44847dea79751e95665
>
> Andrey,
>
> Can you give instructions on how to reproduce please?
>
Please find answers inline:
> - qemu.git codebase (if you have any patches relative to a
> given commit id, please provide the patches).
rolled back to the bare 2.1 release to reproduce; for 3.10 I am hitting the issue
with and without
On Fri, Aug 22, 2014 at 04:05:46PM -0300, Marcelo Tosatti wrote:
> On Fri, Aug 22, 2014 at 04:05:07PM -0300, Marcelo Tosatti wrote:
> > On Fri, Aug 22, 2014 at 10:39:38PM +0400, Andrey Korolyov wrote:
> > > On Fri, Aug 22, 2014 at 9:45 PM, Marcelo Tosatti
> > > wrote:
> > > > On Fri, Aug 22, 2014 at 08:44:53PM +0400, Andrey Korolyov wrote:
On Fri, Aug 22, 2014 at 11:05 PM, Marcelo Tosatti wrote:
> On Fri, Aug 22, 2014 at 04:05:07PM -0300, Marcelo Tosatti wrote:
>> On Fri, Aug 22, 2014 at 10:39:38PM +0400, Andrey Korolyov wrote:
>> > On Fri, Aug 22, 2014 at 9:45 PM, Marcelo Tosatti
>> > wrote:
>> > > On Fri, Aug 22, 2014 at 08:44:53PM +0400, Andrey Korolyov wrote:
On Fri, Aug 22, 2014 at 04:05:07PM -0300, Marcelo Tosatti wrote:
> On Fri, Aug 22, 2014 at 10:39:38PM +0400, Andrey Korolyov wrote:
> > On Fri, Aug 22, 2014 at 9:45 PM, Marcelo Tosatti
> > wrote:
> > > On Fri, Aug 22, 2014 at 08:44:53PM +0400, Andrey Korolyov wrote:
> > >> >
> > >> > I'm running
On Fri, Aug 22, 2014 at 10:39:38PM +0400, Andrey Korolyov wrote:
> On Fri, Aug 22, 2014 at 9:45 PM, Marcelo Tosatti wrote:
> > On Fri, Aug 22, 2014 at 08:44:53PM +0400, Andrey Korolyov wrote:
> >> >
> >> > I'm running 3.10, so patches are not here; will try 3.16 soon. Even if the
> >> > problem is fixed, it will b
On Fri, Aug 22, 2014 at 9:45 PM, Marcelo Tosatti wrote:
> On Fri, Aug 22, 2014 at 08:44:53PM +0400, Andrey Korolyov wrote:
>> >
>> > I'm running 3.10, so patches are not here; will try 3.16 soon. Even if the
>> > problem is fixed, it will still be specific to 2.1, as earlier
>> > releases work w
On 22/08/2014 18:44, Andrey Korolyov wrote:
>>
>> I'm running 3.10, so patches are not here; will try 3.16 soon. Even if the
>> problem is fixed, it will still be specific to 2.1, as earlier
>> releases work well, and I'll bisect it in time.
>
> Thanks, using 3.16 helped indeed. Though the bu
On Fri, Aug 22, 2014 at 08:44:53PM +0400, Andrey Korolyov wrote:
> >
> > I'm running 3.10, so patches are not here; will try 3.16 soon. Even if the
> > problem is fixed, it will still be specific to 2.1, as earlier
> > releases work well, and I'll bisect it in time.
>
> Thanks, using 3.16 helped i
>
> I'm running 3.10, so patches are not here; will try 3.16 soon. Even if the
> problem is fixed, it will still be specific to 2.1, as earlier
> releases work well, and I'll bisect it in time.
Thanks, using 3.16 helped indeed. Though the bug remains as-is with 2.1
on LTS 3.10, should I find the bre
On Thu, Aug 21, 2014 at 8:44 PM, Paolo Bonzini wrote:
> On 21/08/2014 18:41, Andrey Korolyov wrote:
>> Sorry, the test series revealed that the problem is still here, but
>> with a lower hit ratio on a modified 2.1-HEAD using the selected argument
>> set. The actual root of the issue is in '-cpu
>>
On 21/08/2014 18:41, Andrey Korolyov wrote:
> Sorry, the test series revealed that the problem is still here, but
> with a lower hit ratio on a modified 2.1-HEAD using the selected argument
> set. The actual root of the issue is in '-cpu
> qemu64,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1000'
> Wi
On Thu, Aug 21, 2014 at 7:48 PM, Andrey Korolyov wrote:
> On Sat, Aug 9, 2014 at 10:35 AM, Paolo Bonzini wrote:
>>
>>> > Yeah, I need to sit down and look at the code more closely... Perhaps a
>>> > cpu_mark_all_dirty() is enough.
>>>
>>> Hi Paolo,
>>>
>>> cpu_clean_all_dirty, you mean? Has the same effect.
On Sat, Aug 9, 2014 at 10:35 AM, Paolo Bonzini wrote:
>
>> > Yeah, I need to sit down and look at the code more closely... Perhaps a
>> > cpu_mark_all_dirty() is enough.
>>
>> Hi Paolo,
>>
>> cpu_clean_all_dirty, you mean? Has the same effect.
>>
>> Marcin's patch to add cpu_synchronize_state_always() has the same effect.
> > Yeah, I need to sit down and look at the code more closely... Perhaps a
> > cpu_mark_all_dirty() is enough.
>
> Hi Paolo,
>
> cpu_clean_all_dirty, you mean? Has the same effect.
>
> Marcin's patch to add cpu_synchronize_state_always() has the same
> effect.
>
> What do you prefer?
I'd p
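For reference, the cpu_clean_all_dirty() option under discussion amounts to a loop like the following, modeled on the cpu_synchronize_all_states() loop in cpus.c; the exact field and macro names here are assumptions, not a quote of any patch:

/* Sketch: make the next cpu_synchronize_state() re-read registers from
 * KVM by marking every vCPU's cached copy as not authoritative. */
static void cpu_clean_all_dirty_sketch(void)
{
    CPUState *cpu;

    CPU_FOREACH(cpu) {
        /* kvm_vcpu_dirty == true means "QEMU's copy wins and will be
         * pushed back via kvm_arch_put_registers() on the next entry";
         * clearing it discards QEMU's cached copy instead. */
        cpu->kvm_vcpu_dirty = false;
    }
}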
On Mon, Aug 04, 2014 at 08:30:48PM +0200, Paolo Bonzini wrote:
> On 04/08/2014 18:30, Marcin Gibuła wrote:
> >
> >
> > is this analysis deep enough for you? I don't know if that can be fixed
> > with the existing API, as cpu_synchronize_all_states() is an all-or-nothing
> > kind of thing.
> >
> > K
On 04/08/2014 18:30, Marcin Gibuła wrote:
>
>
> is this analysis deep enough for you? I don't know if that can be fixed
> with the existing API, as cpu_synchronize_all_states() is an all-or-nothing
> kind of thing.
>
> Kvmclock needs it only to read the current cpu registers, so syncing
> everything is
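To make the all-or-nothing complaint concrete: kvmclock only wants to read registers, but cpu_synchronize_all_states() also flips every vCPU's dirty flag. A hypothetical narrower helper (not an existing QEMU API, shown only to illustrate the distinction) might look like:

/* Hypothetical: read one vCPU's registers from KVM without marking
 * QEMU's cached copy dirty, so nothing stale is written back on the
 * next guest entry.  Like the real sync helpers, this would have to
 * run on the vCPU thread, i.e. be dispatched via run_on_cpu(). */
static void kvm_get_registers_readonly(CPUState *cpu)
{
    kvm_arch_get_registers(cpu);
    /* Deliberately leave cpu->kvm_vcpu_dirty unset: a later
     * cpu_synchronize_state() will still re-read from the kernel. */
}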
On 2014-07-31 13:27, Marcin Gibuła wrote:
Can you dump *env before and after the call to kvm_arch_get_registers?
Yes, but it seems they are equal - I used memcmp() to compare them. Is
there any other side effect that cpu_synchronize_all_states() may have?
I think I found it.
The reason f
Can you dump *env before and after the call to kvm_arch_get_registers?
Yes, but it seems they are equal - I used memcmp() to compare them. Is
there any other side effect that cpu_synchronize_all_states() may have?
I think I found it.
The reason for the hang is that, when the second call to
kvm_ar
On 2014-07-30 15:38, Paolo Bonzini wrote:
On 30/07/2014 14:02, Marcin Gibuła wrote:
without it:
s/without/with/ of course...
called do_kvm_cpu_synchronize_state_always
called do_kvm_cpu_synchronize_state_always
called do_kvm_cpu_synchronize_state: vcpu not dirty, getting registers
c
On 30/07/2014 14:02, Marcin Gibuła wrote:
> without it:
>
> called do_kvm_cpu_synchronize_state_always
> called do_kvm_cpu_synchronize_state_always
> called do_kvm_cpu_synchronize_state: vcpu not dirty, getting registers
> called do_kvm_cpu_synchronize_state: vcpu not dirty, getting registers
On 29.07.2014 18:58, Paolo Bonzini wrote:
On 18/07/2014 10:48, Paolo Bonzini wrote:
It is easy to find out if the "fix" is related to 1 or 2/3: just write
if (cpu->kvm_vcpu_dirty) {
    printf ("do_kvm_cpu_synchronize_state_always: look at 2/3\n");
    kvm_arch_get_registers(cpu);
On 18/07/2014 10:48, Paolo Bonzini wrote:
>
> It is easy to find out if the "fix" is related to 1 or 2/3: just write
>
> if (cpu->kvm_vcpu_dirty) {
>     printf ("do_kvm_cpu_synchronize_state_always: look at 2/3\n");
>     kvm_arch_get_registers(cpu);
> } else {
>
On 2014-07-18 11:37, Paolo Bonzini wrote:
On 18/07/2014 11:32, Marcin Gibuła wrote:
3) the next CPU entry will call kvm_arch_put_registers:
if (cpu->kvm_vcpu_dirty) {
    kvm_arch_put_registers(cpu, KVM_PUT_RUNTIME_STATE);
    cpu->kvm_vcpu_dirty = false;
On 18/07/2014 11:32, Marcin Gibuła wrote:
3) the next CPU entry will call kvm_arch_put_registers:
if (cpu->kvm_vcpu_dirty) {
    kvm_arch_put_registers(cpu, KVM_PUT_RUNTIME_STATE);
    cpu->kvm_vcpu_dirty = false;
}
But I don't set cpu->kvm_vcpu_dirty
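For context, the dirty check quoted above sits at the top of QEMU's vCPU run loop; the following is condensed and simplified from kvm_cpu_exec() in kvm-all.c of that era, so treat the surrounding details as approximate:

/* Condensed sketch of the relevant part of kvm_cpu_exec(). */
static int kvm_cpu_exec_sketch(CPUState *cpu)
{
    int run_ret;

    do {
        /* If QEMU's cached register copy was made authoritative
         * (kvm_vcpu_dirty == true), push it into the kernel before
         * re-entering the guest.  This is the write-back that can
         * clobber interrupt state when the cache is stale. */
        if (cpu->kvm_vcpu_dirty) {
            kvm_arch_put_registers(cpu, KVM_PUT_RUNTIME_STATE);
            cpu->kvm_vcpu_dirty = false;
        }

        run_ret = kvm_vcpu_ioctl(cpu, KVM_RUN, 0);
        /* ... handle the exit reason ... */
    } while (run_ret == 0);

    return run_ret;
}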
The name of the hack^Wfunction is tricky, because compared to
do_kvm_cpu_synchronize_state there are three things you change:
1) you always synchronize the state
2) the next call to do_kvm_cpu_synchronize_state will do
kvm_arch_get_registers
Yes.
3) the next CPU entry will call kvm_arch_put_registers
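For contrast with the _always variant, this is the shape of the lazy do_kvm_cpu_synchronize_state() that points 1-3 refer to (as in kvm-all.c around QEMU 2.1, reproduced from memory, so details are approximate):

static void do_kvm_cpu_synchronize_state(void *arg)
{
    CPUState *cpu = arg;

    /* Lazy: only read registers from KVM when QEMU's cached copy is not
     * already authoritative, then mark it dirty so the next guest entry
     * writes it back with kvm_arch_put_registers(). */
    if (!cpu->kvm_vcpu_dirty) {
        kvm_arch_get_registers(cpu);
        cpu->kvm_vcpu_dirty = true;
    }
}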
On (Fri) 18 Jul 2014 [10:48:40], Paolo Bonzini wrote:
> On 17/07/2014 15:25, Marcin Gibuła wrote:
> >+static void do_kvm_cpu_synchronize_state_always(void *arg)
> >+{
> >+    CPUState *cpu = arg;
> >+
> >+    kvm_arch_get_registers(cpu);
> >+}
> >+
>
> The name of the hack^Wfunction is tricky
On 18/07/2014 10:44, Marcin Gibuła wrote:
Paolo,
if the patch in its current form is not acceptable to you for inclusion,
I'll try to rewrite it according to your comments.
The problem is that we don't know _why_ the patch is fixing things.
Considering that your kvmclock bug has been there
On 17/07/2014 15:25, Marcin Gibuła wrote:
+static void do_kvm_cpu_synchronize_state_always(void *arg)
+{
+    CPUState *cpu = arg;
+
+    kvm_arch_get_registers(cpu);
+}
+
The name of the hack^Wfunction is tricky, because compared to
do_kvm_cpu_synchronize_state there are three things you
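The hunk above only shows the vCPU-thread callback; the quoted patch presumably drives it through run_on_cpu() like its siblings. A sketch of the assumed wrapper (not part of the quoted hunk):

/* Assumed wrapper, modeled on kvm_cpu_synchronize_state(): the
 * KVM_GET_* ioctls must be issued from the vCPU's own thread, so the
 * callback is dispatched there. */
void kvm_cpu_synchronize_state_always(CPUState *cpu)
{
    run_on_cpu(cpu, do_kvm_cpu_synchronize_state_always, cpu);
}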
Does it fix the problem with libvirt migration timing out for you as well?
Oh, forgot to mention - yes, all migration-related problems are fixed.
Though the release is in a freeze phase right now, I'd like to ask the
maintainers to consider the possibility of fixing the problem on top of
the current tree instead
On Fri, Jul 18, 2014 at 12:21 PM, Marcin Gibuła wrote:
>>> could you try the attached patch? It's an incredibly ugly workaround that
>>> calls
>>> cpu_synchronize_all_states() in a way that bypasses the lazy execution logic.
>>>
>>> But it works for me. If that works for you as well, it's somehow related
>
could you try the attached patch? It's an incredibly ugly workaround that calls
cpu_synchronize_all_states() in a way that bypasses the lazy execution logic.
But it works for me. If that works for you as well, it's somehow related to
the lazy execution of cpu_synchronize_all_states.
--
mg
Yes, it is working w
On Thu, Jul 17, 2014 at 5:25 PM, Marcin Gibuła wrote:
>>> 2.1-rc2 behaves exactly the same.
>>>
>>> Interestingly enough, resetting the guest system causes I/O to work again. So
>>> it's not qemu that hangs on IO; rather, it fails to notify the guest about
>>> completed operations that were issued during mi
On 2014-07-17 21:18, Dr. David Alan Gilbert wrote:
I don't know if this is the same case, but Gerd showed me a migration failure
that might be related. 2.0 seems OK, 2.1-rc0 is broken (and I've not found
another working point in between yet).
The test case involves booting a Fedora livecd (
I don't know if this is the same case, but Gerd showed me a migration failure
that might be related. 2.0 seems OK, 2.1-rc0 is broken (and I've not found
another working point in between yet).
The test case involves booting a Fedora livecd (using an IDE CDROM device)
and after the migration we're
2.1-rc2 behaves exactly the same.
Interestingly enough, resetting the guest system causes I/O to work again. So
it's not qemu that hangs on IO; rather, it fails to notify the guest about
completed operations that were issued during migration.
And it's somehow caused by calling cpu_synchronize_all_states()
On Thu, Jul 17, 2014 at 3:54 PM, Marcin Gibuła wrote:
>>> Yes, exactly. An iSCSI-based setup can take some minutes to deploy, given
>>> a prepared image, and I have a one hundred percent hit rate for the
>>> original issue with it.
>>
>>
>> I've reproduced your IO hang with 2.0 and both
>> 9b1786829aefb83f37a8f3135e3ea91c56001b56 and
Yes, exactly. An iSCSI-based setup can take some minutes to deploy, given
a prepared image, and I have a one hundred percent hit rate for the
original issue with it.
I've reproduced your IO hang with 2.0 and both
9b1786829aefb83f37a8f3135e3ea91c56001b56 and
a096b3a6732f846ec57dc28b47ee9435aa0609bf applied.
I've reproduced your IO hang with 2.0 and both
9b1786829aefb83f37a8f3135e3ea91c56001b56 and
a096b3a6732f846ec57dc28b47ee9435aa0609bf applied.
Reverting 9b1786829aefb83f37a8f3135e3ea91c56001b56 indeed fixes the
problem (but reintroduces the block-migration hang). It seems like a qemu
bug rather than g
I'm using both of them applied on top of 2.0 in production and have no
problems with them. I'm using NFS exclusively with cache=none.
So, I shall test vm-migration and drive-migration with 2.1.0-rc2 with no
extra patches applied or reverted, on a VM that is running fio, am I correct?
Yes, exactly.
On Thu, Jul 17, 2014 at 1:28 AM, Marcin Gibuła wrote:
>> Tested on an iSCSI pool; there is a no-cache requirement, and while rbd
>> with cache disabled may survive one migration, the iSCSI backend always
>> hangs. As before, just rolling back the problematic commit fixes
>> the problem, and adding cp
Tested on an iSCSI pool; there is a no-cache requirement, and while rbd
with cache disabled may survive one migration, the iSCSI backend always
hangs. As before, just rolling back the problematic commit fixes
the problem, and adding cpu_synchronize_all_states to migration.c makes
no difference at a glance.
On Wed, Jul 16, 2014 at 5:24 PM, Andrey Korolyov wrote:
> On Wed, Jul 16, 2014 at 3:52 PM, Marcelo Tosatti wrote:
>> On Wed, Jul 16, 2014 at 12:38:51PM +0400, Andrey Korolyov wrote:
>>> On Wed, Jul 16, 2014 at 5:16 AM, Marcelo Tosatti
>>> wrote:
>>> > On Wed, Jul 16, 2014 at 03:40:47AM +0400, Andrey Korolyov wrote:
On Wed, Jul 16, 2014 at 3:52 PM, Marcelo Tosatti wrote:
> On Wed, Jul 16, 2014 at 12:38:51PM +0400, Andrey Korolyov wrote:
>> On Wed, Jul 16, 2014 at 5:16 AM, Marcelo Tosatti wrote:
>> > On Wed, Jul 16, 2014 at 03:40:47AM +0400, Andrey Korolyov wrote:
>> >> On Wed, Jul 16, 2014 at 2:01 AM, Paolo Bonzini wrote:
On Wed, Jul 16, 2014 at 09:35:16AM +0200, Marcin Gibuła wrote:
> >Andrey,
> >
> >Can you please provide instructions on how to create a reproducible
> >environment?
> >
> >The following patch is equivalent to the original patch, for
> >the purposes of fixing the kvmclock problem.
> >
> >Perhaps it becomes easier to spot the reason for the hang you are experiencing.
On Wed, Jul 16, 2014 at 12:38:51PM +0400, Andrey Korolyov wrote:
> On Wed, Jul 16, 2014 at 5:16 AM, Marcelo Tosatti wrote:
> > On Wed, Jul 16, 2014 at 03:40:47AM +0400, Andrey Korolyov wrote:
> >> On Wed, Jul 16, 2014 at 2:01 AM, Paolo Bonzini wrote:
> >> > On 15/07/2014 23:25, Andrey Korolyov wrote:
On Wed, Jul 16, 2014 at 5:16 AM, Marcelo Tosatti wrote:
> On Wed, Jul 16, 2014 at 03:40:47AM +0400, Andrey Korolyov wrote:
>> On Wed, Jul 16, 2014 at 2:01 AM, Paolo Bonzini wrote:
>> > On 15/07/2014 23:25, Andrey Korolyov wrote:
>> >
>> >> On Wed, Jul 16, 2014 at 1:09 AM, Marcelo Tosatti
>>
Andrey,
Can you please provide instructions on how to create a reproducible
environment?
The following patch is equivalent to the original patch, for
the purposes of fixing the kvmclock problem.
Perhaps it becomes easier to spot the reason for the hang you are
experiencing.
Marcelo,
the origin
On Wed, Jul 16, 2014 at 03:40:47AM +0400, Andrey Korolyov wrote:
> On Wed, Jul 16, 2014 at 2:01 AM, Paolo Bonzini wrote:
> > On 15/07/2014 23:25, Andrey Korolyov wrote:
> >
> >> On Wed, Jul 16, 2014 at 1:09 AM, Marcelo Tosatti
> >> wrote:
> >>>
> >>> On Tue, Jul 15, 2014 at 06:01:08PM +0400, Andrey Korolyov wrote:
On Wed, Jul 16, 2014 at 03:40:47AM +0400, Andrey Korolyov wrote:
> On Wed, Jul 16, 2014 at 2:01 AM, Paolo Bonzini wrote:
> > On 15/07/2014 23:25, Andrey Korolyov wrote:
> >
> >> On Wed, Jul 16, 2014 at 1:09 AM, Marcelo Tosatti
> >> wrote:
> >>>
> >>> On Tue, Jul 15, 2014 at 06:01:08PM +0400, Andrey Korolyov wrote:
On Wed, Jul 16, 2014 at 2:01 AM, Paolo Bonzini wrote:
> On 15/07/2014 23:25, Andrey Korolyov wrote:
>
>> On Wed, Jul 16, 2014 at 1:09 AM, Marcelo Tosatti
>> wrote:
>>>
>>> On Tue, Jul 15, 2014 at 06:01:08PM +0400, Andrey Korolyov wrote:
On Tue, Jul 15, 2014 at 10:52 AM, Andrey Korolyov wrote:
On 15/07/2014 23:25, Andrey Korolyov wrote:
On Wed, Jul 16, 2014 at 1:09 AM, Marcelo Tosatti wrote:
On Tue, Jul 15, 2014 at 06:01:08PM +0400, Andrey Korolyov wrote:
On Tue, Jul 15, 2014 at 10:52 AM, Andrey Korolyov wrote:
On Tue, Jul 15, 2014 at 9:03 AM, Amit Shah wrote:
On (Sun) 13 Jul 2014 [16:28:56], Andrey Korolyov wrote:
On Wed, Jul 16, 2014 at 1:09 AM, Marcelo Tosatti wrote:
> On Tue, Jul 15, 2014 at 06:01:08PM +0400, Andrey Korolyov wrote:
>> On Tue, Jul 15, 2014 at 10:52 AM, Andrey Korolyov wrote:
>> > On Tue, Jul 15, 2014 at 9:03 AM, Amit Shah wrote:
>> >> On (Sun) 13 Jul 2014 [16:28:56], Andrey Korolyov wrote:
On Tue, Jul 15, 2014 at 06:01:08PM +0400, Andrey Korolyov wrote:
> On Tue, Jul 15, 2014 at 10:52 AM, Andrey Korolyov wrote:
> > On Tue, Jul 15, 2014 at 9:03 AM, Amit Shah wrote:
> >> On (Sun) 13 Jul 2014 [16:28:56], Andrey Korolyov wrote:
> >>> Hello,
> >>>
> >>> the issue is not specific to the
On Tue, Jul 15, 2014 at 9:32 PM, Andrey Korolyov wrote:
> On Tue, Jul 15, 2014 at 7:57 PM, Paolo Bonzini wrote:
>> On 13/07/2014 17:29, Andrey Korolyov wrote:
>>
>>> Small follow-up: the issue has a probabilistic nature, it seems - from a
>>> limited number of runs, it is reproducible within three
On Tue, Jul 15, 2014 at 7:57 PM, Paolo Bonzini wrote:
> On 13/07/2014 17:29, Andrey Korolyov wrote:
>
>> Small follow-up: the issue has a probabilistic nature, it seems - from a
>> limited number of runs, it is reproducible within three cases:
>> 1) live migration went well, I/O locked up,
>> 2)
On 13/07/2014 17:29, Andrey Korolyov wrote:
Small follow-up: the issue has a probabilistic nature, it seems - from a
limited number of runs, it is reproducible within three cases:
1) live migration went well, I/O locked up,
2) live migration failed by timeout, I/O locked up,
3) live migration w
On Tue, Jul 15, 2014 at 10:52 AM, Andrey Korolyov wrote:
> On Tue, Jul 15, 2014 at 9:03 AM, Amit Shah wrote:
>> On (Sun) 13 Jul 2014 [16:28:56], Andrey Korolyov wrote:
>>> Hello,
>>>
>>> the issue is not specific to the iothread code because generic
>>> virtio-blk also hangs up:
>>
>> Do you know
On Tue, Jul 15, 2014 at 9:03 AM, Amit Shah wrote:
> On (Sun) 13 Jul 2014 [16:28:56], Andrey Korolyov wrote:
>> Hello,
>>
>> the issue is not specific to the iothread code because generic
>> virtio-blk also hangs up:
>
> Do you know which version works well? If you could bisect, that'll
> help a lot.
On (Sun) 13 Jul 2014 [16:28:56], Andrey Korolyov wrote:
> Hello,
>
> the issue is not specific to the iothread code because generic
> virtio-blk also hangs up:
Do you know which version works well? If you could bisect, that'll
help a lot.
Thanks,
Amit
On Sun, Jul 13, 2014 at 4:28 PM, Andrey Korolyov wrote:
> Hello,
>
> the issue is not specific to the iothread code because generic
> virtio-blk also hangs up:
>
> Given a code set like the one in
> http://www.mail-archive.com/qemu-devel@nongnu.org/msg246164.html,
> launch a VM with a virtio-blk disk and
Hello,
the issue is not specific to the iothread code because generic
virtio-blk also hangs up:
Given a code set like the one in
http://www.mail-archive.com/qemu-devel@nongnu.org/msg246164.html,
launch a VM with a virtio-blk disk and a writeback rbd backend, fire up
fio, and migrate once with libvirt:
time vi