On 11/16/2016 10:53 PM, Stefan Hajnoczi wrote:
> Disabling notifications during virtqueue processing reduces the number of
> exits. The virtio-net device already uses virtio_queue_set_notifications() but
> virtio-blk and virtio-scsi do not.
>
> The following benchmark shows a 15% reduction in virtio-blk-pci MMIO exits:
>
>   (host)$ qemu-system-x86_64 \
>               -enable-kvm -m 1024 -cpu host \
>               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
>                      cache=none,aio=native
>   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
>   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
>
> Number of kvm_fast_mmio events:
>   Unpatched: 685k
>   Patched:   592k (-15%, lower is better)
>
> Note that a workload with iodepth=1 and a single thread will not benefit - this
> is a batching optimization. The effect should be strongest with large iodepth
> and multiple threads submitting I/O. The guest I/O scheduler also affects the
> optimization.
I have trouble seeing any difference in terms of performance or CPU load (other than a reduced number of kicks). I was expecting some benefit from reducing the spinlock hold times in virtio-blk, but finding the sweet spot needs more setup. Maybe it will show its benefit with the polling thing?

>
> Stefan Hajnoczi (3):
>   virtio: add missing vdev->broken check
>   virtio-blk: suppress virtqueue kick during processing
>   virtio-scsi: suppress virtqueue kick during processing
>
>  hw/block/virtio-blk.c | 18 ++++++++++++------
>  hw/scsi/virtio-scsi.c | 36 +++++++++++++++++++++---------------
>  hw/virtio/virtio.c    |  4 ++++
>  3 files changed, 37 insertions(+), 21 deletions(-)
>
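For readers following along, the batching pattern under discussion is roughly the following. This is a minimal standalone mock, not the actual patch: the MockVirtQueue type and the vq_* helpers are invented purely for illustration, and the real series operates on QEMU's VirtQueue with virtio_queue_set_notification() inside the device's handle-output path. The point it shows is the shape of the loop: disable guest notifications while draining, re-enable them, then re-check the queue to close the race window.

#include <stdbool.h>
#include <stdio.h>

/* Mock virtqueue; names are illustrative, not QEMU's API. */
typedef struct {
    int pending;            /* requests the guest has queued */
    bool notifications_on;  /* would the guest kick us per request? */
} MockVirtQueue;

static void vq_set_notification(MockVirtQueue *vq, bool enable)
{
    vq->notifications_on = enable;
}

static bool vq_pop(MockVirtQueue *vq)
{
    if (vq->pending == 0) {
        return false;
    }
    vq->pending--;
    return true;
}

static bool vq_empty(MockVirtQueue *vq)
{
    return vq->pending == 0;
}

/*
 * Kick handler: suppress notifications while draining the queue, then
 * re-enable them and re-check for requests that raced in, so no request
 * is left behind without a wakeup.
 */
static void handle_kick(MockVirtQueue *vq)
{
    do {
        vq_set_notification(vq, false);   /* batch: no exit per request */

        while (vq_pop(vq)) {
            /* process one request (submit I/O, etc.) */
        }

        vq_set_notification(vq, true);    /* let the guest kick us again */
    } while (!vq_empty(vq));              /* close the race window */
}

int main(void)
{
    MockVirtQueue vq = { .pending = 8, .notifications_on = true };
    handle_kick(&vq);
    printf("pending after handler: %d\n", vq.pending);
    return 0;
}

As the cover letter says, this only pays off when several requests are in flight per kick; with iodepth=1 the inner loop pops a single request and the notification toggling is pure overhead.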