On Wed, Apr 27, 2022 at 5:35 PM Stefan Hajnoczi <stefa...@redhat.com> wrote:
>
> The virtio-scsi event virtqueue is not emptied by its handler function.
> This is typical for rx virtqueues where the device uses buffers when
> some event occurs (e.g. a packet is received, an error condition
> happens, etc).
>
> Polling non-empty virtqueues wastes CPU cycles. We are not waiting for
> new buffers to become available, we are waiting for an event to occur,
> so it's a misuse of CPU resources to poll for buffers.
>
> Introduce the new virtio_queue_aio_attach_host_notifier_no_poll() API,
> which is identical to virtio_queue_aio_attach_host_notifier() except
> that it does not poll the virtqueue.
>
> Before this patch the following command-line consumed 100% CPU in the
> IOThread polling and calling virtio_scsi_handle_event():
>
>   $ qemu-system-x86_64 -M accel=kvm -m 1G -cpu host \
>         --object iothread,id=iothread0 \
>         --device virtio-scsi-pci,iothread=iothread0 \
>         --blockdev file,filename=test.img,aio=native,cache.direct=on,node-name=drive0 \
>         --device scsi-hd,drive=drive0
>
> After this patch CPU is no longer wasted.
>
> Reported-by: Nir Soffer <nsof...@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
> ---
>  include/hw/virtio/virtio.h      |  1 +
>  hw/scsi/virtio-scsi-dataplane.c |  2 +-
>  hw/virtio/virtio.c              | 13 +++++++++++++
>  3 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index b31c4507f5..b62a35fdca 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -317,6 +317,7 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
>  void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled);
>  void virtio_queue_host_notifier_read(EventNotifier *n);
>  void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx);
> +void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx);
>  void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx);
>  VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
>  VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
> index 29575cbaf6..8bb6e6acfc 100644
> --- a/hw/scsi/virtio-scsi-dataplane.c
> +++ b/hw/scsi/virtio-scsi-dataplane.c
> @@ -138,7 +138,7 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
>
>      aio_context_acquire(s->ctx);
>      virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
> -    virtio_queue_aio_attach_host_notifier(vs->event_vq, s->ctx);
> +    virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
>
>      for (i = 0; i < vs->conf.num_queues; i++) {
>          virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 9d637e043e..67a873f54a 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -3534,6 +3534,19 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
>                                virtio_queue_host_notifier_aio_poll_end);
>  }
>
> +/*
> + * Same as virtio_queue_aio_attach_host_notifier() but without polling. Use
> + * this for rx virtqueues and similar cases where the virtqueue handler
> + * function does not pop all elements. When the virtqueue is left non-empty
> + * polling consumes CPU cycles and should not be used.
> + */
> +void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
> +{
> +    aio_set_event_notifier(ctx, &vq->host_notifier, true,
> +                           virtio_queue_host_notifier_read,
> +                           NULL, NULL);
> +}
> +
>  void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
>  {
>      aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
> --
> 2.35.1
>
I tested patches 1 and 2 on top of 34723f59371f3fd02ea59b94674314b875504426
and it solved the issue.

Tested-by: Nir Soffer <nsof...@redhat.com>

Nir