On 11.03.2025 at 11:11, Stefan Hajnoczi wrote:
> Allow virtio-scsi virtqueues to be assigned to different IOThreads. This
> makes it possible to take advantage of host multi-queue block layer
> scalability by assigning virtqueues that have affinity with vCPUs to
> different IOThreads that have affinity with host CPUs. The same feature
> was introduced for virtio-blk in the past:
> https://developers.redhat.com/articles/2024/09/05/scaling-virtio-blk-disk-io-iothread-virtqueue-mapping
> 
> Here are fio randread 4k iodepth=64 results from a 4 vCPU guest with an
> Intel P4800X SSD:
> iothreads IOPS
> ------------------------------
> 1         189576
> 2         312698
> 4         346744
> 
> Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>

> @@ -1218,14 +1224,16 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
>      VirtIOSCSI *s = VIRTIO_SCSI(vdev);
> +    AioContext *ctx = s->vq_aio_context[0];

At the end of the series, this is always qemu_aio_context...

>      SCSIDevice *sd = SCSI_DEVICE(dev);
> -    int ret;
>  
> -    if (s->ctx && !s->dataplane_fenced) {
> -        ret = blk_set_aio_context(sd->conf.blk, s->ctx, errp);
> -        if (ret < 0) {
> -            return;
> -        }
> +    if (ctx != qemu_get_aio_context() && !s->dataplane_fenced) {
> +        /*
> +         * Try to make the BlockBackend's AioContext match ours. Ignore failure
> +         * because I/O will still work although block jobs and other users
> +         * might be slower when multiple AioContexts use a BlockBackend.
> +         */
> +        blk_set_aio_context(sd->conf.blk, ctx, errp);
>      }

...so this becomes dead code. With multiple AioContexts, it's not clear
which one should be used; virtio-blk simply takes the first one. The
equivalent thing here would be to use the AioContext of the first
command queue.
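A minimal sketch of that suggestion (assuming the first command queue
still comes right after the fixed control and event queues, as counted
by VIRTIO_SCSI_VQ_NUM_FIXED in hw/scsi/virtio-scsi.c):

    /* Queue 0 is the control queue and stays pinned to the main loop at
     * the end of the series, so take the AioContext of the first command
     * queue instead. */
    AioContext *ctx = s->vq_aio_context[VIRTIO_SCSI_VQ_NUM_FIXED];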

>      if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
> @@ -1260,7 +1268,7 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
>  
>      qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
>  
> -    if (s->ctx) {
> +    if (s->vq_aio_context[0] != qemu_get_aio_context()) {

Same problem here.

>          /* If other users keep the BlockBackend in the iothread, that's ok */
>          blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
>      }
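The same sketch would apply to this check, e.g.:

    if (s->vq_aio_context[VIRTIO_SCSI_VQ_NUM_FIXED] !=
        qemu_get_aio_context()) {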

As you wanted to avoid squashing patches anyway, I think this can be
fixed on top of this series.

Kevin

