On Sat, 07/18 22:21, Paolo Bonzini wrote:
> It is pretty rare for aio_notify to actually set the EventNotifier.  It
> can happen with worker threads such as thread-pool.c's, but otherwise it
> should never be set thanks to the ctx->notify_me optimization.  The
> previous patch, unfortunately, added an unconditional call to
> event_notifier_test_and_clear; now add a userspace fast path that
> avoids the call.
>
> Note that it is not possible to do the same with event_notifier_set;
> it would break, as proved (again) by the included formal model.
>
> This patch survived over 800 reboots on aarch64 KVM.
For aio-posix, how about keeping the optimization local, which doesn't need
an atomic operation? (no idea for win32 :)

diff --git a/aio-posix.c b/aio-posix.c
index 5c8b266..7e98123 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -236,6 +236,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     int i, ret;
     bool progress;
     int64_t timeout;
+    int aio_notifier_idx = -1;
 
     aio_context_acquire(ctx);
     progress = false;
@@ -256,11 +257,18 @@ bool aio_poll(AioContext *ctx, bool blocking)
     assert(npfd == 0);
 
     /* fill pollfds */
+    i = 0;
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
         if (!node->deleted && node->pfd.events) {
             add_pollfd(node);
+            if (node->pfd.fd == event_notifier_get_fd(&ctx->notifier)) {
+                assert(aio_notifier_idx == -1);
+                aio_notifier_idx = i;
+            }
+            i++;
         }
     }
+    assert(aio_notifier_idx != -1);
 
     timeout = blocking ? aio_compute_timeout(ctx) : 0;
 
@@ -276,7 +284,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
         aio_context_acquire(ctx);
     }
 
-    event_notifier_test_and_clear(&ctx->notifier);
+    if (pollfds[aio_notifier_idx].revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) {
+        event_notifier_test_and_clear(&ctx->notifier);
+    }
 
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
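
The idea above is that poll() already tells us whether the notifier fd is
readable, so the read() hidden inside event_notifier_test_and_clear can be
skipped when revents is clear.  For illustration only, here is a minimal
standalone sketch of that test-and-clear behaviour on a Linux eventfd —
the names notifier_set/notifier_test_and_clear are mine, not QEMU's, and
this is not the actual EventNotifier implementation:

```c
#include <stdbool.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Set the notifier: bump the eventfd counter.  Roughly what a
 * notification (e.g. aio_notify) does on the write side. */
static void notifier_set(int efd)
{
    uint64_t one = 1;
    ssize_t len = write(efd, &one, sizeof(one));
    (void)len;
}

/* Clear the notifier and report whether it was set.  With a
 * nonblocking eventfd, read() drains the counter and succeeds
 * only if the counter was nonzero; EAGAIN means "was not set".
 * This read is the cost the patch above avoids when poll()
 * reported no revents on the notifier fd. */
static bool notifier_test_and_clear(int efd)
{
    uint64_t value;
    return read(efd, &value, sizeof(value)) == sizeof(value);
}
```

The fd would be created with eventfd(0, EFD_NONBLOCK); setting it twice and
clearing it once still leaves it clear, because read() drains the whole
counter in one call.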