On 17/12/2015 01:59, Fam Zheng wrote:
> On Wed, 12/16 19:33, Paolo Bonzini wrote:
>> When called from a coroutine, bdrv_ioctl must be asynchronous just like
>> e.g. bdrv_flush. The code was incorrectly making it synchronous, fix
>> it.
>>
>> Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>
>> ---
>> Fam, any reason why you did it this way? I don't see
>> any coroutine caller, but it doesn't make much sense. :)
>
> That is a surprising question! From a coroutine, the path is bdrv_flush ->
> bdrv_flush_co_entry -> bdrv_co_flush, which I think is always synchronous,
> especially after looking at the code around the call to
> bs->drv->bdrv_aio_flush:
>
>     acb = bs->drv->bdrv_aio_flush(bs, bdrv_co_io_em_complete, &co);
>     if (acb == NULL) {
>         ret = -EIO;
>     } else {
>         qemu_coroutine_yield();
>         ret = co.ret;
>     }
>
> Am I missing something?
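(An aside for readers following along: the yield/re-enter handshake in the
snippet above is completed by bdrv_co_io_em_complete, the callback passed to
bdrv_aio_flush. From memory of the 2.5-era block/io.c, so approximate rather
than verbatim, it is roughly:

    typedef struct CoroutineIOCompletion {
        Coroutine *coroutine;
        int ret;
    } CoroutineIOCompletion;

    static void bdrv_co_io_em_complete(void *opaque, int ret)
    {
        CoroutineIOCompletion *co = opaque;

        /* Record the AIO result and resume the coroutine that yielded
         * after issuing the request. */
        co->ret = ret;
        qemu_coroutine_enter(co->coroutine, NULL);
    }

So the call only looks synchronous from inside the coroutine: it yields and
is resumed from the event loop once the flush completes.)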
In the coroutine case, the yield is hidden in the drivers, and it may or
may not happen. For example, qcow2_co_flush_to_os starts with

    qemu_co_mutex_lock(&s->lock);

which can yield.

Paolo

> Fam
>
>>
>>  block/io.c | 7 ++++---
>>  1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/block/io.c b/block/io.c
>> index e00fb5d..841f5b5 100644
>> --- a/block/io.c
>> +++ b/block/io.c
>> @@ -2614,10 +2614,11 @@ int bdrv_ioctl(BlockDriverState *bs, unsigned long
>> int req, void *buf)
>>          bdrv_co_ioctl_entry(&data);
>>      } else {
>>          Coroutine *co = qemu_coroutine_create(bdrv_co_ioctl_entry);
>> +
>>          qemu_coroutine_enter(co, &data);
>> -    }
>> -    while (data.ret == -EINPROGRESS) {
>> -        aio_poll(bdrv_get_aio_context(bs), true);
>> +        while (data.ret == -EINPROGRESS) {
>> +            aio_poll(bdrv_get_aio_context(bs), true);
>> +        }
>>      }
>>      return data.ret;
>>  }
>> --
>> 2.5.0
>>
>
>
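(For reference, with the patch applied the whole function reads roughly as
below. This is reconstructed from the hunk plus its unchanged context; the
BdrvIoctlCoData initializer is inferred from the surrounding code rather
than quoted, so treat it as a sketch:

    int bdrv_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
    {
        BdrvIoctlCoData data = {
            .bs = bs,
            .req = req,
            .buf = buf,
            .ret = -EINPROGRESS,
        };

        if (qemu_in_coroutine()) {
            /* Fast path: the entry runs in the caller's coroutine, and a
             * yield hidden in the driver suspends the caller until the
             * event loop resumes it, so ret is set on return. */
            bdrv_co_ioctl_entry(&data);
        } else {
            Coroutine *co = qemu_coroutine_create(bdrv_co_ioctl_entry);

            qemu_coroutine_enter(co, &data);
            /* Outside coroutine context, the enter may return before the
             * ioctl completes; poll the AioContext until it does. */
            while (data.ret == -EINPROGRESS) {
                aio_poll(bdrv_get_aio_context(bs), true);
            }
        }
        return data.ret;
    }

The structural point of the patch is that the aio_poll() loop must only run
outside coroutine context: inside a coroutine, waiting is expressed by the
yield itself, never by spinning the event loop.)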