On Mon, Jul 29, 2013 at 10:58:40AM +0200, Kevin Wolf wrote:
> Am 26.07.2013 um 10:43 hat Stefan Hajnoczi geschrieben:
> > On Thu, Jul 25, 2013 at 07:53:33PM +0100, Alex Bligh wrote:
> > >
> > >
> > > --On 25 July 2013 14:32:59 +0200 Jan Kiszka
> > > wrote:
> > >
> > > >>I would happily add a QE
On 29/07/2013 10:58, Kevin Wolf wrote:
> Am 26.07.2013 um 10:43 hat Stefan Hajnoczi geschrieben:
>> On Thu, Jul 25, 2013 at 07:53:33PM +0100, Alex Bligh wrote:
>>>
>>>
>>> --On 25 July 2013 14:32:59 +0200 Jan Kiszka wrote:
>>>
> I would happily add a QEMUClock of each type to AioContext. T
--On 29 July 2013 10:58:40 +0200 Kevin Wolf wrote:
But considering your first paragraph, why is it safe to leave block jobs
running while we're migrating? Do we really do that? It sounds unsafe to
me.
If I remember right, the sending end now does the bdrv_close before
the receiving end does t
Am 26.07.2013 um 10:43 hat Stefan Hajnoczi geschrieben:
> On Thu, Jul 25, 2013 at 07:53:33PM +0100, Alex Bligh wrote:
> >
> >
> > --On 25 July 2013 14:32:59 +0200 Jan Kiszka wrote:
> >
> > >>I would happily add a QEMUClock of each type to AioContext. They are after
> > >>all pretty lightweight.
Jan,
--On 26 July 2013 12:05:06 +0200 Jan Kiszka wrote:
I would happily add a QEMUClock of each type to AioContext. They are
after all pretty lightweight.
What's the point of adding tons of QEMUClock instances? Considering
proper abstraction, how are they different for each AioContext? Will
On 2013-07-25 20:53, Alex Bligh wrote:
>
>
> --On 25 July 2013 14:32:59 +0200 Jan Kiszka wrote:
>
>>> I would happily add a QEMUClock of each type to AioContext. They are
>>> after all pretty lightweight.
>>
>> What's the point of adding tons of QEMUClock instances? Considering
>> proper ab
On 26/07/2013 11:08, Alex Bligh wrote:
>
>
> --On 26 July 2013 10:43:45 +0200 Stefan Hajnoczi
> wrote:
>
>> block.c and block/qed.c use vm_clock because block drivers should not do
>> guest I/O while the vm is stopped. This is especially true during live
>> migration where it's important
--On 26 July 2013 10:43:45 +0200 Stefan Hajnoczi wrote:
block.c and block/qed.c use vm_clock because block drivers should not do
guest I/O while the vm is stopped. This is especially true during live
migration where it's important to hand off the image file from the
source host to the destin
On Thu, Jul 25, 2013 at 07:53:33PM +0100, Alex Bligh wrote:
>
>
> --On 25 July 2013 14:32:59 +0200 Jan Kiszka wrote:
>
> > >>I would happily add a QEMUClock of each type to AioContext. They are after
> >>all pretty lightweight.
> >
> >What's the point of adding tons of QEMUClock instances? Consid
--On 25 July 2013 14:32:59 +0200 Jan Kiszka wrote:
I would happily add a QEMUClock of each type to AioContext. They are after
all pretty lightweight.
What's the point of adding tons of QEMUClock instances? Considering
proper abstraction, how are they different for each AioContext? Will
they
On 2013-07-25 15:31, Stefan Hajnoczi wrote:
> On Thu, Jul 25, 2013 at 3:06 PM, Jan Kiszka wrote:
>> On 2013-07-25 15:02, Paolo Bonzini wrote:
>>> On 25/07/2013 14:48, Jan Kiszka wrote:
The concept of clocks (with start/stop property) and active timers shall
not be mixed; they are in
On Thu, Jul 25, 2013 at 3:06 PM, Jan Kiszka wrote:
> On 2013-07-25 15:02, Paolo Bonzini wrote:
>> On 25/07/2013 14:48, Jan Kiszka wrote:
>>> The concept of clocks (with start/stop property) and active timers shall
>>> not be mixed; they are independent.
>>
>> Are you referring to this in part
On 25/07/2013 14:48, Jan Kiszka wrote:
> The concept of clocks (with start/stop property) and active timers shall
> not be mixed; they are independent.
Are you referring to this in particular:
void pause_all_vcpus(void)
{
CPUState *cpu = first_cpu;
qemu_clock_enable(vm_clock, false
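The distinction being argued here (a clock that can be stopped vs. the timers armed on it) can be illustrated with a tiny model. This is a hypothetical sketch, not QEMU's actual code: `MiniClock` and `mini_clock_enable` merely mimic the semantics of `qemu_clock_enable(vm_clock, false)` in `pause_all_vcpus()`, where stopping the clock suppresses all timers attached to it without touching the timers themselves.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical minimal model of a clock with a start/stop property:
 * timers attached to a disabled clock never report themselves as
 * expired, which is what lets pause_all_vcpus() freeze guest-visible
 * time with a single qemu_clock_enable() call. */
typedef struct {
    bool enabled;
    int64_t now_ns;     /* current reading of the clock */
} MiniClock;

typedef struct {
    MiniClock *clock;
    int64_t expire_ns;  /* absolute deadline on that clock */
} MiniTimer;

bool mini_timer_expired(const MiniTimer *t)
{
    /* A stopped clock suppresses all of its timers. */
    return t->clock->enabled && t->clock->now_ns >= t->expire_ns;
}

void mini_clock_enable(MiniClock *c, bool enabled)
{
    c->enabled = enabled;
}
```

The timer stays armed across a disable/enable cycle; only the clock's state changes, which matches the "clocks and timers are independent" point above.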
On 2013-07-25 15:02, Paolo Bonzini wrote:
> On 25/07/2013 14:48, Jan Kiszka wrote:
>> The concept of clocks (with start/stop property) and active timers shall
>> not be mixed; they are independent.
>
> Are you referring to this in particular:
>
> void pause_all_vcpus(void)
> {
> CPUStat
On 25/07/2013 14:38, Jan Kiszka wrote:
> On 2013-07-25 14:35, Paolo Bonzini wrote:
>> On 25/07/2013 14:32, Jan Kiszka wrote:
>>> On 2013-07-25 14:21, Alex Bligh wrote:
--On 25 July 2013 14:05:30 +0200 Stefan Hajnoczi
wrote:
> Alex Bligh's series gives each Ai
On 2013-07-25 14:41, Stefan Hajnoczi wrote:
> On Thu, Jul 25, 2013 at 2:38 PM, Jan Kiszka wrote:
>> On 2013-07-25 14:35, Paolo Bonzini wrote:
>>> On 25/07/2013 14:32, Jan Kiszka wrote:
On 2013-07-25 14:21, Alex Bligh wrote:
>
>
> --On 25 July 2013 14:05:30 +0200 Stefan Hajnoc
On Thu, Jul 25, 2013 at 2:38 PM, Jan Kiszka wrote:
> On 2013-07-25 14:35, Paolo Bonzini wrote:
>> On 25/07/2013 14:32, Jan Kiszka wrote:
>>> On 2013-07-25 14:21, Alex Bligh wrote:
--On 25 July 2013 14:05:30 +0200 Stefan Hajnoczi
wrote:
> Alex Bligh's series gives
On 2013-07-25 14:35, Paolo Bonzini wrote:
> On 25/07/2013 14:32, Jan Kiszka wrote:
>> On 2013-07-25 14:21, Alex Bligh wrote:
>>>
>>>
>>> --On 25 July 2013 14:05:30 +0200 Stefan Hajnoczi
>>> wrote:
>>>
Alex Bligh's series gives each AioContext its own rt_clock. This avoids
the need
On 25/07/2013 14:32, Jan Kiszka wrote:
> On 2013-07-25 14:21, Alex Bligh wrote:
>>
>>
>> --On 25 July 2013 14:05:30 +0200 Stefan Hajnoczi
>> wrote:
>>
>>> Alex Bligh's series gives each AioContext its own rt_clock. This avoids
>>> the need for synchronization in the simple case. If we requi
On 2013-07-25 14:21, Alex Bligh wrote:
>
>
> --On 25 July 2013 14:05:30 +0200 Stefan Hajnoczi
> wrote:
>
>> Alex Bligh's series gives each AioContext its own rt_clock. This avoids
>> the need for synchronization in the simple case. If we require timer
>> access between threads then we really
--On 25 July 2013 14:05:30 +0200 Stefan Hajnoczi wrote:
Alex Bligh's series gives each AioContext its own rt_clock. This avoids
the need for synchronization in the simple case. If we require timer
access between threads then we really need to synchronize.
You pointed out in another email t
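The reason a per-AioContext clock "avoids the need for synchronization in the simple case" is that each event loop only has to scan its own timer list to turn the nearest deadline into its next poll timeout. A sketch, under the assumption of a flat array of deadlines (the helper name `aio_compute_timeout_ns` is illustrative, not QEMU's API):

```c
#include <stdint.h>

/* Compute the poll timeout for one event loop from its own timers
 * only: no locking is needed because no other thread touches this
 * list. Returns -1 (block forever) when no timer is armed. */
int64_t aio_compute_timeout_ns(const int64_t *deadlines, int n,
                               int64_t now_ns)
{
    int64_t timeout = -1;
    for (int i = 0; i < n; i++) {
        int64_t left = deadlines[i] - now_ns;
        if (left < 0) {
            left = 0;        /* already expired: poll returns at once */
        }
        if (timeout < 0 || left < timeout) {
            timeout = left;
        }
    }
    return timeout;
}
```

Timer access from another thread is exactly what breaks this: then the deadline array becomes shared state, which is the synchronization question Stefan raises.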
On Sun, Jul 21, 2013 at 04:42:57PM +0800, Liu Ping Fan wrote:
> Currently, the timers run on iothread within BQL, so virtio-block dataplane
> cannot use throttling,
> as Stefan Hajnoczi pointed out in his patches to port dataplane onto the block
> layer. (Thanks, Stefan)
> To enable this feature, I pla
On 2013-07-25 13:44, Stefan Hajnoczi wrote:
> On Tue, Jul 23, 2013 at 10:51:06AM +0800, liu ping fan wrote:
>> On Mon, Jul 22, 2013 at 2:28 PM, Jan Kiszka wrote:
>>> On 2013-07-22 06:38, liu ping fan wrote:
On Sun, Jul 21, 2013 at 5:53 PM, Alex Bligh wrote:
> Liu,
>
>
> --On
On Mon, Jul 22, 2013 at 06:18:03PM +0800, liu ping fan wrote:
> On Mon, Jul 22, 2013 at 5:40 PM, Alex Bligh wrote:
> > Liu,
> >
> >
> > --On 22 July 2013 12:38:02 +0800 liu ping fan wrote:
> >
> >> I read your second series, and try to summarize the main differences between
> >> us. Please correct me
On Tue, Jul 23, 2013 at 10:51:06AM +0800, liu ping fan wrote:
> On Mon, Jul 22, 2013 at 2:28 PM, Jan Kiszka wrote:
> > On 2013-07-22 06:38, liu ping fan wrote:
> >> On Sun, Jul 21, 2013 at 5:53 PM, Alex Bligh wrote:
> >>> Liu,
> >>>
> >>>
> >>> --On 21 July 2013 16:42:57 +0800 Liu Ping Fan wrote
On 24/07/2013 10:37, Alex Bligh wrote:
>
>
> --On 24 July 2013 09:01:22 +0100 Alex Bligh wrote:
>
Most 'reasonable' POSIX compliant operating systems have ppoll
>>>
>>> Really? I could find no manpages for any of Solaris and *BSD.
>>
>> OK I shall (re)research that then! I suppose se
--On 24 July 2013 09:01:22 +0100 Alex Bligh wrote:
Most 'reasonable' POSIX compliant operating systems have ppoll
Really? I could find no manpages for any of Solaris and *BSD.
OK I shall (re)research that then! I suppose select() / pselect() is
an alternative when there are few FDs.
Lo
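The property the thread is after is that ppoll() takes a struct timespec rather than poll()'s millisecond count, so sub-millisecond timeouts can be expressed directly. A minimal Linux sketch (the wrapper name `wait_250us` is ours, for illustration):

```c
#define _GNU_SOURCE
#include <poll.h>
#include <time.h>

/* ppoll() expresses its timeout in nanoseconds, so a 250 microsecond
 * wait is representable directly, where poll() would have to round
 * to 0 or 1 ms. */
int wait_250us(void)
{
    struct timespec ts = { .tv_sec = 0, .tv_nsec = 250 * 1000 };
    /* No fds to watch: ppoll just sleeps for the timeout and
     * returns 0 on expiry. */
    return ppoll(NULL, 0, &ts, NULL);
}
```

On hosts without ppoll, pselect() offers the same nanosecond timespec, which is why it is mentioned above as the fallback when the fd count is small.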
[...]
>> http://social.msdn.microsoft.com/Forums/vstudio/en-US/e8a7cb1e-9edd-4ee3-982e-f66b7bf6ae44/improve-accuracy-waitforsingleobject
>>
>> suggest that WaitFor{Single,Multiple}Objects can have pretty
>> appalling latency anyway (100ms!), and there's no evidence that's
>> limited by making one o
On 24/07/2013 10:01, Alex Bligh wrote:
>>>
>>
>> Part of it should be fixed by os_setup_early_signal_handling.
>>
>> This is corroborated by the fact that without
>> os_setup_early_signal_handling Wine always works, and Windows breaks.
>
> This:
> http://www.windowstimestamp.com/description
Paolo,
--On 24 July 2013 09:54:57 +0200 Paolo Bonzini wrote:
Alex, can you add it to your series? (Note that you must set a timer
slack of 1, because 0 is interpreted as "default").
Sure, will do. I'm guessing I'll have to look for that inside configure
as well.
--
Alex Bligh
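The timer-slack setting Paolo describes maps to a single Linux prctl() per thread. A sketch of the call (the function name is ours; note the point from the message above that the value must be 1, because 0 means "reset to the default", which is 50 us):

```c
#include <sys/prctl.h>

/* Shrink this thread's timer slack to 1 ns so that poll/ppoll
 * timeouts are not widened by the kernel's default 50 us slack.
 * Passing 0 would NOT disable slack: it restores the default. */
int set_minimal_timer_slack(void)
{
    return prctl(PR_SET_TIMERSLACK, 1UL, 0UL, 0UL, 0UL);
}
```

Since PR_SET_TIMERSLACK is Linux-only, a configure-time probe (as Alex suggests) is needed before relying on it.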
Paolo,
--On 24 July 2013 09:43:28 +0200 Paolo Bonzini wrote:
Most 'reasonable' POSIX compliant operating systems have ppoll
Really? I could find no manpages for any of Solaris and *BSD.
OK I shall (re)research that then! I suppose select() / pselect() is
an alternative when there are few
On 24/07/2013 09:43, liu ping fan wrote:
> I spent some time digging into the kernel code, and found that the
> resolution lost through the timeout of poll/select etc. is caused by the
> timeout being widened by a slack region.
> See code in
> do_poll()
>if (!poll_schedule_timeout(wait, TASK_INTERRUPTIBLE, to, slack))
On Wed, Jul 24, 2013 at 2:42 PM, Paolo Bonzini wrote:
> On 24/07/2013 03:28, liu ping fan wrote:
>> On Tue, Jul 23, 2013 at 6:30 PM, Paolo Bonzini wrote:
>>> > On 23/07/2013 04:53, liu ping fan wrote:
>> The scenario I can figure out is: if we adopt the timeout of poll, then when
>>
On 24/07/2013 09:31, Alex Bligh wrote:
>
>
> --On 24 July 2013 08:42:26 +0200 Paolo Bonzini wrote:
>
>> With ppoll, is this true or just hearsay?
>>
>> (Without ppoll, indeed setitimer has 1 us resolution while poll has 1
>> ms; too bad that select has other problems, because select has al
--On 24 July 2013 08:42:26 +0200 Paolo Bonzini wrote:
With ppoll, is this true or just hearsay?
(Without ppoll, indeed setitimer has 1 us resolution while poll has 1
ms; too bad that select has other problems, because select has also 1 us
resolution).
Most 'reasonable' POSIX compliant oper
On 24/07/2013 03:28, liu ping fan wrote:
> On Tue, Jul 23, 2013 at 6:30 PM, Paolo Bonzini wrote:
>> > On 23/07/2013 04:53, liu ping fan wrote:
>>> >> The scenario I can figure out is: if we adopt the timeout of poll, then when
>>> >> changing the deadline, we need to invoke poll, and set the n
On Tue, Jul 23, 2013 at 6:30 PM, Paolo Bonzini wrote:
> On 23/07/2013 04:53, liu ping fan wrote:
>> The scenario I can figure out is: if we adopt the timeout of poll, then when
>> changing the deadline, we need to re-invoke poll and set the new
>> timeout, right?
>
> Yes, you need to call aio_notify
--On 23 July 2013 10:53:26 +0800 liu ping fan wrote:
Firstly, I can't see the advantage of keeping the alarm_timer stuff
around at all if we can delete it. Save, of course, that on systems
that don't have ppoll or equivalent you lose sub-millisecond timing by
deleting them.
The scenario I ca
On 23/07/2013 04:53, liu ping fan wrote:
> The scenario I can figure out is: if we adopt the timeout of poll, then when
> changing the deadline, we need to re-invoke poll and set the new
> timeout, right?
Yes, you need to call aio_notify so that poll is reinvoked.
Paolo
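The aio_notify() mechanism Paolo refers to can be sketched with a plain pipe: the event loop blocks in poll() on the pipe's read end, and a thread that changes a timer deadline writes one byte, so poll() wakes immediately and the loop recomputes its timeout before blocking again. All names here are illustrative, not QEMU's implementation:

```c
#include <poll.h>
#include <unistd.h>

/* Write end of the notifier pipe; set up once by the caller. */
static int notify_fd_write = -1;

/* Called by any thread after it changes a timer deadline. */
void mini_aio_notify(void)
{
    (void)write(notify_fd_write, "", 1);
}

/* One blocking step of the loop. Returns 1 if woken by a
 * notification (so the caller recomputes its poll timeout),
 * 0 if the timeout expired. */
int mini_aio_poll(int notify_fd_read, int timeout_ms)
{
    struct pollfd pfd = { .fd = notify_fd_read, .events = POLLIN };
    int r = poll(&pfd, 1, timeout_ms);
    if (r > 0 && (pfd.revents & POLLIN)) {
        char c;
        (void)read(notify_fd_read, &c, 1);  /* drain the notification */
        return 1;
    }
    return 0;
}
```

This is why dropping the alarm timers does not lose responsiveness: a deadline change never has to wait for the old timeout to expire.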
On Mon, Jul 22, 2013 at 6:18 PM, liu ping fan wrote:
> On Mon, Jul 22, 2013 at 5:40 PM, Alex Bligh wrote:
>> Liu,
>>
>>
>> --On 22 July 2013 12:38:02 +0800 liu ping fan wrote:
>>
>>> I read your second series, and try to summarize the main differences between
>>> us. Please correct me, if I misunder
On Mon, Jul 22, 2013 at 2:28 PM, Jan Kiszka wrote:
> On 2013-07-22 06:38, liu ping fan wrote:
>> On Sun, Jul 21, 2013 at 5:53 PM, Alex Bligh wrote:
>>> Liu,
>>>
>>>
>>> --On 21 July 2013 16:42:57 +0800 Liu Ping Fan wrote:
>>>
Currently, the timers run on iothread within BQL, so virtio-block
On Mon, Jul 22, 2013 at 5:40 PM, Alex Bligh wrote:
> Liu,
>
>
> --On 22 July 2013 12:38:02 +0800 liu ping fan wrote:
>
>> I read your second series, and try to summarize the main differences between
>> us. Please correct me, if I misunderstood something.
>> --1st. You try to create a separate QemuClo
Liu,
--On 22 July 2013 12:38:02 +0800 liu ping fan wrote:
I read your second series, and try to summarize the main differences between
us. Please correct me, if I misunderstood something.
--1st. You try to create a separate QemuClock for AioContext.
I think QemuClock is the clock event source
On 2013-07-22 06:38, liu ping fan wrote:
> On Sun, Jul 21, 2013 at 5:53 PM, Alex Bligh wrote:
>> Liu,
>>
>>
>> --On 21 July 2013 16:42:57 +0800 Liu Ping Fan wrote:
>>
>>> Currently, the timers run on iothread within BQL, so virtio-block
>>> dataplane cannot use throttling, as Stefan Hajnoczi point
On Sun, Jul 21, 2013 at 5:53 PM, Alex Bligh wrote:
> Liu,
>
>
> --On 21 July 2013 16:42:57 +0800 Liu Ping Fan wrote:
>
>> Currently, the timers run on iothread within BQL, so virtio-block
>> dataplane cannot use throttling, as Stefan Hajnoczi pointed out in his
>> patches to port dataplane onto bl
Liu,
--On 21 July 2013 16:42:57 +0800 Liu Ping Fan wrote:
Currently, the timers run on iothread within BQL, so virtio-block
dataplane cannot use throttling, as Stefan Hajnoczi pointed out in his
patches to port dataplane onto the block layer. (Thanks, Stefan) To enable
this feature, I plan to enable
Currently, the timers run on iothread within BQL, so virtio-block dataplane
cannot use throttling, as Stefan Hajnoczi pointed out in his patches to port
dataplane onto the block layer. (Thanks, Stefan)
To enable this feature, I plan to enable timers to run on AioContext's thread.
And maybe in future, h
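To see why throttling needs timers in the AioContext's own thread: a throttled request that exceeds its budget must be delayed until the leaky bucket drains, and that delay is a timer deadline that has to fire in the dataplane thread itself. A hypothetical next-deadline computation (not the actual throttle code):

```c
#include <stdint.h>

/* Given how many bytes the request overshoots the budget by, return
 * the absolute time (ns) at which the request may proceed, i.e. the
 * deadline to arm on the AioContext's clock. */
int64_t throttle_next_deadline_ns(int64_t now_ns,
                                  int64_t excess_bytes,
                                  int64_t bytes_per_sec)
{
    if (excess_bytes <= 0) {
        return now_ns;               /* within budget: no delay */
    }
    /* Wait long enough for the bucket to drain the excess. */
    return now_ns + excess_bytes * 1000000000LL / bytes_per_sec;
}
```

With timers stuck on the BQL iothread, this deadline would fire in the wrong thread, which is the limitation the series sets out to remove.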