On Thu, Jun 25, 2020 at 08:48:36PM +0200, Philippe Mathieu-Daudé wrote:
> To be able to use multiple queues on the same hardware,
> we need to have each queue able to receive IRQ notifications
> in the correct AIO context.
> The context has to be proper to each queue, not to the block
> driver. Move aio_context from BDRVNVMeState to NVMeQueuePair.
>
> Signed-off-by: Philippe Mathieu-Daudé <phi...@redhat.com>
> ---
> RFC because I'm not familiar with AIO context
To keep things simple I suggest only doing Step 1 in this patch series.

Step 1: The existing irq_notifier handler re-enters the request coroutine
from a BH scheduled in the BlockDriverState's AioContext. It doesn't matter
where the irq_notifier is handled; the completions will run in their
respective BlockDriverStates' AioContexts (see the first sketch at the end
of this mail). This means that two BlockDriverStates with different
AioContexts sharing a single hardware state will work correctly with just a
single hardware queue. Therefore multiqueue support is not required to
support multiple BDSes with different AioContexts.

Step 2: Better performance can be achieved by creating multiple hardware
queuepairs, each with its own irq_notifier. During request submission, an
int queue_idx_from_aio_context(AioContext *ctx) mapping function selects a
hardware queue (see the second sketch below). Ideally that hardware queue's
irq_notifier is handled in the same AioContext for best performance, but
there may be cases where there are more BDS AioContexts than NVMe hardware
queues.

Step 3: Once the QEMU block layer has multiqueue support, we'll no longer
map the BlockDriverState AioContext to a queue index but will instead use
qemu_get_current_aio_context(). At that point a single BDS can process I/O
in multiple AioContexts and hardware queuepairs.
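For reference, the existing completion path in block/nvme.c already works
roughly like this (paraphrased and simplified from nvme_rw_cb(); treat it
as a sketch of the mechanism, not a verbatim quote of the current code):

static void nvme_rw_cb_bh(void *opaque)
{
    NVMeCoData *data = opaque;
    qemu_coroutine_enter(data->co);
}

static void nvme_rw_cb(void *opaque, int ret)
{
    NVMeCoData *data = opaque;
    data->ret = ret;
    if (!data->co) {
        /* The rw coroutine hasn't yielded, don't try to enter. */
        return;
    }
    /*
     * data->ctx is bdrv_get_aio_context(bs), so the coroutine resumes
     * in the BDS AioContext regardless of which thread ran the
     * irq_notifier handler.
     */
    aio_bh_schedule_oneshot(data->ctx, nvme_rw_cb_bh, data);
}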
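And a minimal sketch of the Step 2 mapping, assuming the BDRVNVMeState
layout in block/nvme.c (a queues[] array plus nr_queues). The function
names and the pointer hash are made up for illustration, not something this
patch needs to use:

static int nvme_queue_idx_from_aio_context(BDRVNVMeState *s,
                                           AioContext *ctx)
{
    /*
     * There may be more BDS AioContexts than hardware queues, so wrap
     * around.  The pointer value is only used as a cheap stable hash.
     */
    return (int)(((uintptr_t)ctx >> 3) % (uintptr_t)s->nr_queues);
}

static NVMeQueuePair *nvme_select_queue(BDRVNVMeState *s, AioContext *ctx)
{
    return s->queues[nvme_queue_idx_from_aio_context(s, ctx)];
}

/*
 * Step 3 would then switch callers from the BDS AioContext to the
 * submitting thread's context:
 *
 *     q = nvme_select_queue(s, qemu_get_current_aio_context());
 */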