Paolo,

--On 30 July 2013 11:17:05 +0200 Paolo Bonzini <pbonz...@redhat.com> wrote:

Hmm, do we even need clock->using at this point?  For example:

   qemu_clock_enable()
   {
       clock->enabled = enabled;
       ...
       if (!enabled) {
           /* If another thread is within qemu_run_timers,
            * wait for it to finish.
            */
           qemu_event_wait(&clock->callbacks_done_event);
       }
   }

   qemu_run_timers()
   {
       /* Reset before checking clock->enabled: if the order were
        * reversed, qemu_clock_enable(false) could find the event
        * still set and return while callbacks were about to run.
        */
       qemu_event_reset(&clock->callbacks_done_event);
       if (!clock->enabled) {
           goto out;
       }
       ...
   out:
       qemu_event_set(&clock->callbacks_done_event);
   }

In the fast path this only does two atomic operations (an OR for the
reset, and an XCHG for the set).

I think I'm missing something here. If Pingfan is rebasing on top
of my patches, or is doing vaguely the same thing, then
qemu_clock_enable will do two things:

1. It will walk the list of QEMUTimerLists
2. For each QEMUTimerList associated with the clock, it will call
  qemu_notify or aio_notify

The list of QEMUTimerLists is only accessed by qemu_clock_enable
(to do task 2) and by the creation and deletion of QEMUTimerLists,
which happen only in init and AioContext create/delete (which should
be very rare). I'm presuming qemu_clock_enable(false) is also
pretty rare. It seems to me there is no need for anything more
complex than a simple mutex protecting each QEMUClock's list of
QEMUTimerLists.

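Something like this is what I have in mind. This is only a sketch:
the struct layout and the names (timerlists_lock, timerlist_notify)
are illustrative assumptions on my part, not what is actually in
the tree:

   /* Sketch only: field and function names are assumed. */
   struct QEMUClock {
       QLIST_HEAD(, QEMUTimerList) timerlists;
       QemuMutex timerlists_lock;     /* protects only the list */
       bool enabled;
   };

   void qemu_clock_enable(QEMUClock *clock, bool enabled)
   {
       QEMUTimerList *timer_list;

       clock->enabled = enabled;

       /* Task 2: kick each thread owning a QEMUTimerList so it
        * re-evaluates its deadlines against the new enabled state.
        */
       qemu_mutex_lock(&clock->timerlists_lock);
       QLIST_FOREACH(timer_list, &clock->timerlists, list) {
           timerlist_notify(timer_list);  /* qemu_notify/aio_notify */
       }
       qemu_mutex_unlock(&clock->timerlists_lock);
   }
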
As far as walking the QEMUTimerList itself is concerned, this is
something which is 99.999% done by the thread owning the AioContext.
qemu_clock_enable should not even be walking this list. So I don't
see why the protection here is needed.

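Concretely, the per-thread walk I mean looks roughly like this
(again only a sketch, with assumed QEMUTimer field names):

   /* Runs only in the thread owning the AioContext, so walking
    * active_timers needs no locking against qemu_clock_enable.
    */
   void qemu_run_timers(QEMUClock *clock, QEMUTimerList *timer_list)
   {
       QEMUTimer *ts;
       int64_t current_time;

       if (!clock->enabled) {
           return;
       }
       current_time = qemu_get_clock_ns(clock);
       while ((ts = timer_list->active_timers) != NULL &&
              ts->expire_time <= current_time) {
           /* unlink the expired timer, then run its callback */
           timer_list->active_timers = ts->next;
           ts->next = NULL;
           ts->cb(ts->opaque);
       }
   }
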
--
Alex Bligh
