On Thu, May 05, 2022 at 03:13:46PM +0200, Paolo Bonzini wrote:
> Since commit f9fc8932b1 ("thread-posix: remove the posix semaphore
> support", 2022-04-06) QemuSemaphore has its own mutex and condition
> variable; this adds unnecessary overhead on I/O with small block sizes.
>
> Check the QTAILQ directly instead of adding the indirection of a
> semaphore's count.  Using a semaphore has not been necessary since
> qemu_cond_timedwait was introduced; the new code has to be careful about
> spurious wakeups but it is simpler, for example thread_pool_cancel does
> not have to worry about synchronizing the semaphore count with the number
> of elements of pool->request_list.
>
> Note that the return value of qemu_cond_timedwait (0 for timeout, 1 for
> signal or spurious wakeup) is different from that of qemu_sem_timedwait
> (-1 for timeout, 0 for success).
>
> Reported-by: Lukáš Doktor <ldok...@redhat.com>
> Suggested-by: Stefan Hajnoczi <stefa...@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>
> ---
>  util/thread-pool.c | 30 +++++++++++-------------------
>  1 file changed, 11 insertions(+), 19 deletions(-)
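The pattern being adopted here is the classic timed condition-variable wait
that tolerates spurious wakeups. As a quick illustration only (not the patch
itself), here is a small standalone pthread sketch; the names request_list,
request_cond and stopping are borrowed from the diff, everything else is
made up for the example. Build with "cc -pthread".

/* Toy illustration, not QEMU code: a worker that waits on a condition
 * variable with a timeout and re-checks the queue on every wakeup, since
 * pthread_cond_timedwait() may also return because of a spurious wakeup. */
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef struct Request {
    struct Request *next;
    int work;
} Request;

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t request_cond = PTHREAD_COND_INITIALIZER;
static Request *request_list;   /* toy stand-in for pool->request_list */
static bool stopping;

static void *worker(void *opaque)
{
    (void)opaque;
    pthread_mutex_lock(&lock);
    while (!stopping || request_list) {
        if (!request_list) {
            struct timespec ts;
            clock_gettime(CLOCK_REALTIME, &ts);
            ts.tv_sec += 10;
            /* Returns ETIMEDOUT after 10s of idleness; returns 0 when
             * signalled *or* on a spurious wakeup, so the loop re-checks
             * the queue instead of trusting the return value alone. */
            int ret = pthread_cond_timedwait(&request_cond, &lock, &ts);
            if (ret == ETIMEDOUT && !request_list) {
                break;          /* idle for too long: let this worker exit */
            }
            continue;
        }
        Request *req = request_list;
        request_list = req->next;
        pthread_mutex_unlock(&lock);
        printf("handled request %d\n", req->work);  /* work runs unlocked */
        free(req);
        pthread_mutex_lock(&lock);
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    Request *req = calloc(1, sizeof(*req));
    req->work = 42;
    pthread_mutex_lock(&lock);
    req->next = request_list;
    request_list = req;
    pthread_cond_signal(&request_cond);     /* wake one idle worker */
    pthread_mutex_unlock(&lock);

    pthread_mutex_lock(&lock);
    stopping = true;                        /* drain queue, then terminate */
    pthread_cond_broadcast(&request_cond);
    pthread_mutex_unlock(&lock);

    pthread_join(tid, NULL);
    return 0;
}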
Thanks for writing the patch so quickly, Paolo!

Lukáš: please try this patch to see if it solves the regression. Thanks!

> @@ -337,15 +330,14 @@ void thread_pool_free(ThreadPool *pool)
>
>      /* Wait for worker threads to terminate */
>      pool->stopping = true;
> +    qemu_cond_broadcast(&pool->request_cond);
>      while (pool->cur_threads > 0) {
> -        qemu_sem_post(&pool->sem);
>          qemu_cond_wait(&pool->worker_stopped, &pool->lock);
>      }
>
>      qemu_mutex_unlock(&pool->lock);
>
>      qemu_bh_delete(pool->completion_bh);
> -    qemu_sem_destroy(&pool->sem);
>      qemu_cond_destroy(&pool->worker_stopped);
>      qemu_mutex_destroy(&pool->lock);
>      g_free(pool);

Missing qemu_cond_destroy(&pool->request_cond)?

Otherwise:
Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
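For reference, with that extra destroy added the tail of thread_pool_free()
would presumably end up looking something like this (everything except the
request_cond line is taken from the diff above):

    qemu_mutex_unlock(&pool->lock);

    qemu_bh_delete(pool->completion_bh);
    qemu_cond_destroy(&pool->request_cond);   /* the line that seems missing */
    qemu_cond_destroy(&pool->worker_stopped);
    qemu_mutex_destroy(&pool->lock);
    g_free(pool);
}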