On Tue, Aug 13, 2013 at 03:45:44PM +0200, Jan Kiszka wrote:
> On 2013-08-13 15:39, Alex Bligh wrote:
> > Jan,
> >
> > On 13 Aug 2013, at 14:25, Jan Kiszka wrote:
> >
> >> To my understanding, the use case behind the current behavior is
> >> qemu_aio_wait(), which is only supposed to block when there are
> >> pending requests for the main aio context. We should be able to
> >> address this scenario in a different way as well. I would
> >> definitely prefer not to depend on that hack above.
> >
> > I don't *think* so. If I'm right, the problem is line 233 of
> > aio-posix.c (and similar in the Windows variant):
> >
> >     /* No AIO operations? Get us out of here */
> >     if (!busy) {
> >         return progress;
> >     }
> >
> >     ... do qemu_poll_ns ...
> >
> > busy is set to true if there are any FDs for which ->flush
> > is true and ->io_flush() returns non-zero.
> 
> Right.
> 
> > I think this should instead be looking at the equivalent of
> > FD_ISSET across all FDs (read and write) and at the blocking flag.
> > I.e. if blocking is set to true, it should ALWAYS do qemu_poll_ns,
> > lest it busy-loop rather than wait for the next timer expiry.
> 
> Yes, that would be needed.
> 
> > More here:
> > https://lists.gnu.org/archive/html/qemu-devel/2013-07/msg03950.html
> >
> > I'm not very happy with this logic (but it's the same as before),
> > and I note Stefan removes the horrible busy flag in this series:
> > http://lists.nongnu.org/archive/html/qemu-devel/2013-07/msg00092.html
> 
> Yeah:
> 
> -    /* No AIO operations? Get us out of here */
> -    if (!busy) {
> +    /* early return if we only have the aio_notify() fd */
> +    if (ctx->pollfds->len == 1) {
>          return progress;
>      }
> 
> So this is even worse for my use case.
We can change the semantics of aio_poll() so long as we don't break
existing callers and tests.  It would make sense to do that after
merging the io_flush removal series and the AioContext timers series.

Stefan
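
For readers following along, here is a self-contained sketch of the
behavior Alex is arguing for; it is not the QEMU code, and every name
in it is a made-up stand-in for illustration.  The point is that a
blocking caller must always reach the poll call, with the timeout
capped at the next timer deadline, instead of returning early just
because no fds are busy:

/*
 * Self-contained sketch of the poll-loop behavior under discussion.
 * This is NOT the QEMU code: all names and types are simplified
 * stand-ins invented for illustration.
 */
#include <poll.h>
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/*
 * One iteration of the loop.  deadline_ns is the absolute expiry time
 * of the earliest armed timer, or -1 if no timer is armed.
 * has_busy_fds plays the role of the "busy" flag quoted above.
 */
static bool poll_once(struct pollfd *fds, nfds_t nfds, bool has_busy_fds,
                      bool blocking, int64_t deadline_ns)
{
    int timeout_ms;

    /*
     * The contested early return.  Without the "!blocking" term, a
     * blocking caller that has armed timers but no busy fds returns
     * immediately, gets re-invoked by its caller, and spins until the
     * timer expires.
     */
    if (!has_busy_fds && !blocking) {
        return false;
    }

    if (!blocking) {
        timeout_ms = 0;          /* non-blocking: just check readiness */
    } else if (deadline_ns < 0) {
        timeout_ms = -1;         /* no timer armed: sleep on fds only */
    } else {
        /* Cap the sleep at the next timer deadline so timers fire. */
        int64_t delta_ns = deadline_ns - now_ns();
        timeout_ms = delta_ns > 0 ? (int)(delta_ns / 1000000) : 0;
    }

    return poll(fds, nfds, timeout_ms) > 0;
}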