On 11/08/2015 22:12, Alex Bennée wrote:
Paolo Bonzini <pbonz...@redhat.com> writes:

On 10/08/2015 17:27, fred.kon...@greensocs.com wrote:
  void qemu_mutex_lock_iothread(void)
  {
-    atomic_inc(&iothread_requesting_mutex);
-    /* In the simple case there is no need to bump the VCPU thread out of
-     * TCG code execution.
-     */
-    if (!tcg_enabled() || qemu_in_vcpu_thread() ||
-        !first_cpu || !first_cpu->thread) {
-        qemu_mutex_lock(&qemu_global_mutex);
-        atomic_dec(&iothread_requesting_mutex);
-    } else {
-        if (qemu_mutex_trylock(&qemu_global_mutex)) {
-            qemu_cpu_kick_thread(first_cpu);
-            qemu_mutex_lock(&qemu_global_mutex);
-        }
-        atomic_dec(&iothread_requesting_mutex);
-        qemu_cond_broadcast(&qemu_io_proceeded_cond);
-    }
-    iothread_locked = true;
"iothread_locked = true" must be kept.  Otherwise... yay! :)

Also, if qemu_cond_broadcast(&qemu_io_proceeded_cond) is being dropped,
there is no point in keeping the guff around in qemu_tcg_wait_io_event.

Yes, good point.
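
The code in question is (roughly, going from the current cpus.c) this wait
loop in qemu_tcg_wait_io_event():

    while (iothread_requesting_mutex) {
        qemu_cond_wait(&qemu_io_proceeded_cond, &qemu_global_mutex);
    }

With the atomic_inc/atomic_dec of iothread_requesting_mutex gone the counter
stays at 0, so the loop is dead code and can be removed together with
qemu_io_proceeded_cond and iothread_requesting_mutex themselves.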

BTW this leads to high host CPU consumption, e.g. 100% per VCPU thread, as
the VCPU threads are no longer waiting on qemu_io_proceeded_cond.

Fred
