On Wed, Nov 16, 2016 at 09:53:06PM +0000, Stefan Hajnoczi wrote:
> Disabling notifications during virtqueue processing reduces the number of
> exits. The virtio-net device already uses virtio_queue_set_notification()
> but virtio-blk and virtio-scsi do not.
>
> The following benchmark shows a 15% reduction in virtio-blk-pci MMIO exits:
>
> (host)$ qemu-system-x86_64 \
>         -enable-kvm -m 1024 -cpu host \
>         -drive if=virtio,id=drive0,file=f24.img,format=raw,\
>         cache=none,aio=native
> (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
> (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
>
> Number of kvm_fast_mmio events:
> Unpatched: 685k
> Patched:   592k (-15%, lower is better)
Any chance to see a gain in actual benchmark numbers? This is important
to make sure we are not just shifting overhead around.

> Note that a workload with iodepth=1 and a single thread will not benefit -
> this is a batching optimization. The effect should be strongest with large
> iodepth and multiple threads submitting I/O. The guest I/O scheduler also
> affects the optimization.
>
> Stefan Hajnoczi (3):
>   virtio: add missing vdev->broken check
>   virtio-blk: suppress virtqueue kick during processing
>   virtio-scsi: suppress virtqueue kick during processing
>
>  hw/block/virtio-blk.c | 18 ++++++++++++------
>  hw/scsi/virtio-scsi.c | 36 +++++++++++++++++++++---------------
>  hw/virtio/virtio.c    |  4 ++++
>  3 files changed, 37 insertions(+), 21 deletions(-)
>
> --
> 2.7.4