On 18.09.2023 at 18:16, Stefan Hajnoczi wrote:
> virtio-blk and virtio-scsi devices need a way to specify the mapping between
> IOThreads and virtqueues. At the moment all virtqueues are assigned to a
> single IOThread or the main loop. This single thread can be a CPU
> bottleneck, so it is necessary to allow finer-grained assignment to spread
> the load. With this series applied, "pidstat -t 1" shows that guests with
> -smp 2 or higher are able to exploit multiple IOThreads.
>
> This series introduces command-line syntax for the new iothread-vq-mapping
> property as follows:
>
>   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0","vqs":[0,1,2]},...]},...'
>
> IOThreads are specified by name and virtqueues are specified by 0-based
> index.
>
> It will be common to simply assign virtqueues round-robin across a set
> of IOThreads. A convenient syntax that does not require specifying
> individual virtqueue indices is available:
>
>   --device '{"driver":"virtio-blk-pci","iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"},...]},...'
>
> There is no way to reassign virtqueues at runtime and I expect that to be a
> very rare requirement.
>
> Note that JSON --device syntax is required for the iothread-vq-mapping
> parameter because it's non-scalar.
>
> Based-on: 20230912231037.826804-1-stefa...@redhat.com ("[PATCH v3 0/5]
> block-backend: process I/O in the current AioContext")
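[Editorial aside: the convenient syntax above omits the "vqs" lists, leaving QEMU to spread virtqueues across the named IOThreads. A small Python sketch, not QEMU code, of what a plain round-robin expansion (virtqueue i to IOThread i mod N, an assumption about the exact policy) would produce:]

```python
# Illustrative sketch only, not QEMU code: expand an iothread-vq-mapping
# whose entries omit explicit "vqs" lists into per-virtqueue assignments,
# assuming a plain round-robin policy (virtqueue i -> iothread i mod N).

def expand_round_robin(mapping, num_vqs):
    """Assign each 0-based virtqueue index to an IOThread round-robin."""
    expanded = [{"iothread": m["iothread"], "vqs": []} for m in mapping]
    for vq in range(num_vqs):
        expanded[vq % len(expanded)]["vqs"].append(vq)
    return expanded

mapping = [{"iothread": "iothread0"}, {"iothread": "iothread1"}]
print(expand_round_robin(mapping, 4))
# [{'iothread': 'iothread0', 'vqs': [0, 2]}, {'iothread': 'iothread1', 'vqs': [1, 3]}]
```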
Does this strictly depend on patch 5/5 of that series, or would it just be a
missed opportunity for optimisation by unnecessarily running some requests
from a different thread?

I suspect it does depend on the other virtio-blk series, though:

[PATCH 0/4] virtio-blk: prepare for the multi-queue block layer
https://patchew.org/QEMU/20230914140101.1065008-1-stefa...@redhat.com/

Is this right?

Given that soft freeze is early next week, maybe we should try to merge just
the bare minimum of strictly necessary dependencies.

Kevin