On Wed, 07/29 14:03, Paolo Bonzini wrote:
>
> On 29/07/2015 13:53, Fam Zheng wrote:
> >> > Yes, though I think you'd end up reverting patches 10 and 11 in
> >> > the end.
> > We will add outer disable/enable pairs to prevent another thread's
> > aio_poll from sneaking in between bdrv_aio_poll calls, but we
> > needn't obsolete bdrv_aio_poll() because of that - it can be useful
> > by itself. For example bdrv_aio_cancel shouldn't look at ioeventfd,
> > otherwise it could spin for too long on high load. Does that make
> > sense?
>
> Did you mean bdrv_drain() (when it is not already surrounded by
> disable/enable pairs in the caller)? But yes, that makes sense.
>
> I'm not sure that it makes sense to disable/enable in places such as
> bdrv_pread. The caller's block, if any, should suffice. In this sense
> you'd end up reverting large parts of patch 10.
>
> Then you would have to see how many calls to bdrv_aio_poll are still
> there, and how many can be converted with no semantic change to
> aio_poll (e.g. there's no difference in qemu-img.c), and you'd end up
> reverting patches 9 and 11 too. But we can look at that later.
Another advantage of bdrv_aio_poll() is that in the main loop we will
not need a separate AioContext for changes like:

http://patchwork.ozlabs.org/patch/514968/

because a nested aio_poll will automatically be limited to processing
only block layer events.

My idea is to eventually let the main loop use aio_poll, which means we
would also move chardev onto it. It would be neat to put all fds of the
main thread into a single AioContext.

Fam