RFC because changing the coroutine code is scary and I'm new to it.

Stressing the VDI code with qemu-img:

  qemu-img convert -p -W -m 16 -O vdi input.qcow2 output.vdi

leads to a hang relatively quickly on a machine with sufficient CPUs.
A similar test targeting either the raw or qcow2 formats, or avoiding
out-of-order writes, completes fine.

At the point of the hang all of the coroutines are sitting in
qemu_co_queue_wait_impl(), called from either qemu_co_rwlock_rdlock()
or qemu_co_rwlock_upgrade(), all referencing the same CoRwlock
(BDRVVdiState.bmap_lock).

The comment in the last patch explains what I believe is happening -
downgrading an rwlock from write to read can later result in a failure
to schedule an appropriate coroutine when the read lock is released.

A less invasive change might be to simply have the read side of the
unlock code mark *all* queued coroutines as runnable. This seems
somewhat wasteful, as any read hopefuls that run before a write
hopeful will immediately put themselves back on the queue.

No code other than block/vdi.c appears to use
qemu_co_rwlock_downgrade().

The block/vdi.c changes are small things noticed by inspection when
looking for the cause of the hang.

David Edmondson (4):
  block/vdi: When writing new bmap entry fails, don't leak the buffer
  block/vdi: Don't assume that blocks are larger than VdiHeader
  coroutine/mutex: Store the coroutine in the CoWaitRecord only once
  coroutine/rwlock: Wake writers in preference to readers

 block/vdi.c                | 11 +++++++----
 include/qemu/coroutine.h   |  8 +++++---
 util/qemu-coroutine-lock.c | 25 +++++++++++++++----------
 3 files changed, 27 insertions(+), 17 deletions(-)

-- 
2.30.1