On Wed, 07/29 14:03, Paolo Bonzini wrote:
> 
> 
> On 29/07/2015 13:53, Fam Zheng wrote:
> >> > Yes, though I think you'd end up reverting patches 10 and 11 in the end.
> > We will add outer disable/enable pairs to prevent another thread's aio_poll
> > from sneaking in between bdrv_aio_poll calls, but we needn't obsolete
> > bdrv_aio_poll() because of that - it can be useful by itself. For example
> > bdrv_aio_cancel shouldn't look at ioeventfd, otherwise it could spin for too
> > long on high load. Does that make sense?
> 
> Did you mean bdrv_drain() (when it is not already surrounded by
> disable/enable pairs in the caller)? But yes, that makes sense.
> 
> I'm not sure that it makes sense to disable/enable in places such as
> bdrv_pread. The caller's block, if any, should suffice. In this sense
> you'd end up reverting large parts of patch 10.
> 
> Then you would have to see how many calls to bdrv_aio_poll are still
> there, and how many can be converted with no semantic change to aio_poll
> (e.g. there's no difference in qemu-img.c), and you'd end up reverting
> patches 9 and 11 too. But we can look at that later.
Okay. There are 19 bdrv_aio_poll()'s after this series:

block.c                bdrv_create
block/curl.c           curl_init_state
block/io.c             bdrv_drain
block/io.c             bdrv_drain_all
block/io.c             bdrv_prwv_co
block/io.c             bdrv_get_block_status_above
block/io.c             bdrv_aio_cancel
block/io.c             bdrv_flush
block/io.c             bdrv_discard
block/io.c             bdrv_flush_io_queue
block/nfs.c            nfs_get_allocated_file_size
block/qed-table.c      qed_read_l1_table_sync
block/qed-table.c      qed_write_l1_table_sync
block/qed-table.c      qed_read_l2_table_sync
block/qed-table.c      qed_write_l2_table_sync
blockjob.c             block_job_finish_sync
include/block/block.h  bdrv_get_stats
qemu-img.c             run_block_job
qemu-io-cmds.c         do_co_write_zeroes
qemu-io-cmds.c         wait_break_f

Most of them make some sense to me, but not many make a real difference. The
most important ones should be bdrv_drain* and bdrv_flush, and those can be
taken care of from the caller side.

Fam