On Thu, Nov 17, 2016 at 12:17:57AM +0200, Michael S. Tsirkin wrote:
> On Wed, Nov 16, 2016 at 09:53:06PM +0000, Stefan Hajnoczi wrote:
> > Disabling notifications during virtqueue processing reduces the number
> > of exits.  The virtio-net device already uses
> > virtio_queue_set_notification() but virtio-blk and virtio-scsi do not.
> >
> > The following benchmark shows a 15% reduction in virtio-blk-pci MMIO
> > exits:
> >
> >   (host)$ qemu-system-x86_64 \
> >               -enable-kvm -m 1024 -cpu host \
> >               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
> >                      cache=none,aio=native
> >   (guest)$ fio  # jobs=4, iodepth=8, direct=1, randread
> >   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
> >
> > Number of kvm_fast_mmio events:
> > Unpatched: 685k
> > Patched:   592k (-15%, lower is better)
>
> Any chance to see a gain in actual benchmark numbers?
> This is important to make sure we are not just
> shifting overhead around.
Good idea.  I reran this morning without any tracing and compared against
bare metal.

Total reads for a 30-second 4 KB random read benchmark with 4 processes x
iodepth=8:

Bare metal: 26440 MB
Unpatched:  19799 MB
Patched:    21252 MB

Patched vs Unpatched:  +7% improvement
Patched vs Bare metal: 20% virtualization overhead

The disk image is an 8 GB raw file on XFS on LVM on dm-crypt on a Samsung
MZNLN256HCHP 256 GB SATA SSD.  This is just my laptop.

Seems like a worthwhile improvement to me.

Stefan
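
P.S. For anyone skimming the thread, the idea is simply to keep guest->host
notifications suppressed while the device is already draining the vring, and
to re-enable them before a final emptiness check so nothing is missed.  Below
is a rough sketch of that loop, not the actual patch:
virtio_queue_set_notification(), virtqueue_pop(), virtqueue_push(),
virtio_notify() and virtio_queue_empty() are the existing virtio core
helpers, while handle_vq()/handle_one_request() are made-up names standing in
for the device-specific code.

    /* Sketch of the notification-suppression loop (not the actual patch).
     * Assumes the QEMU tree; handle_vq()/handle_one_request() are made-up
     * names used only for illustration.
     */
    #include "qemu/osdep.h"
    #include "hw/virtio/virtio.h"

    /* Stand-in for the real virtio-blk/virtio-scsi request handling. */
    static void handle_one_request(VirtIODevice *vdev, VirtQueue *vq,
                                   VirtQueueElement *elem)
    {
        /* A real device parses and services the request here before
         * completing it.
         */
        virtqueue_push(vq, elem, 0);
        virtio_notify(vdev, vq);
        g_free(elem);
    }

    static void handle_vq(VirtIODevice *vdev, VirtQueue *vq)
    {
        VirtQueueElement *elem;

        do {
            /* We are already processing the ring, so further guest->host
             * notifications are pure overhead.
             */
            virtio_queue_set_notification(vq, 0);

            while ((elem = virtqueue_pop(vq, sizeof(VirtQueueElement)))) {
                handle_one_request(vdev, vq, elem);
            }

            /* Re-enable notifications, then re-check the ring so a request
             * queued in the meantime is either handled by another loop
             * iteration or triggers a fresh notification.
             */
            virtio_queue_set_notification(vq, 1);
        } while (!virtio_queue_empty(vq));
    }

virtio-net already follows this shape; the series just brings virtio-blk and
virtio-scsi in line with it.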