Hi Stefan,

Following your advice, I have finished the benchmarks with multiple vCPUs (SMP) and parallel I/O workloads. The results still show that the dataplane-enabled disk has no advantage over the non-dataplane disk in random-write mode, but in sequential-write mode it has a clear advantage.
1. Environment:
   a). QEMU 1.4 master branch
   b). Kernel: 3.5.0-2.fc17.x86_64
   c). Virtual disks location: the same local SATA hard disk with an ext4 filesystem
   d). vCPUs: 4
   e). VM start command (guest OS: win7/qed; disk1: raw/non-dataplane/10G/NTFS; disk2: raw/dataplane/10G/NTFS):

./x86_64-softmmu/qemu-system-x86_64 -enable-kvm -smp 4 -name win7 -M pc-0.15 \
    -m 1024 -boot c \
    -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 \
    -monitor stdio \
    -drive file=/home/win7.qed,if=none,format=qed,cache=none,id=drive0 \
    -chardev spicevmc,id=charchannel2,name=vdagent \
    -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 \
    -chardev pty,id=charchannel3 \
    -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel3,id=channel3,name=arbitrary.virtio.serial.port.name \
    -usb -device usb-tablet,id=input0 \
    -spice port=3007,addr=186.100.8.121,disable-ticketing \
    -vga qxl -global qxl.vram_size=67108864 \
    -device AC97,id=sound0,bus=pci.0,addr=0x4 \
    -device virtio-blk-pci,drive=drive0,bus=pci.0,addr=0x6,bootindex=1 \
    -drive id=drive1,if=none,cache=none,format=raw,file=/home/data.img \
    -device virtio-blk-pci,drive=drive1,bus=pci.0,addr=0x7 \
    -drive id=drive2,if=none,cache=none,format=raw,file=/home/data2.img,aio=native \
    -device virtio-blk-pci,drive=drive2,bus=pci.0,addr=0x8,scsi=off,x-data-plane=on,config-wce=off

2. Testing Tool and Parameters:
   a). Only the IOMeter app running in the VM, and no other I/O in the host (Dom0)
   b). Workloads: 100% random, 0% read, 16K request size, 50 outstanding I/Os; 0% random (sequential), 25% read, 16K request size, 50 outstanding I/Os
   c). The two disks were tested separately, not simultaneously
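For anyone wanting to reproduce a comparable parallel workload from a Linux guest, a rough fio equivalent of the IOMeter random-write case could look like the following. This is only a sketch, not what was actually run (the tests above used IOMeter in a Windows guest), and /dev/vdb is an assumed target device:

```shell
# Approximate fio equivalent of IOMeter's "100% random, 0% read,
# 16K request size, 50 outstanding I/Os" case.
# Hypothetical: /dev/vdb is an assumed virtio-blk device name.
fio --name=randwrite-16k \
    --filename=/dev/vdb \
    --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=16k --iodepth=50 \
    --runtime=60 --time_based \
    --group_reporting
```

The iodepth=50 setting mirrors IOMeter's 50 outstanding I/Os, which is the parallelism that lets data plane show its advantage.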
3. Testing Results:

RW mode                   | IOPS (dataplane) | IOPS (non-dataplane) | MBPS (dataplane) | MBPS (non-dataplane)
100% Random / 0% Read     | 303.178867       | 300.511928           | 4.737170         | 4.695499
100% Sequential / 25% Read| 21748.887189     | 7631.164060          | 339.826362       | 119.236938

----------
Leiqzhang
Best Regards

> -----Original Message-----
> From: Stefan Hajnoczi [mailto:stefa...@redhat.com]
> Sent: April 4, 2013, 15:21
> To: Zhangleiqiang
> Cc: Stefan Hajnoczi; qemu-devel@nongnu.org; leiqzhang; Haofeng; Luohao
> (brian)
> Subject: Re: [Qemu-devel] question about performance of dataplane
>
> On Tue, Apr 02, 2013 at 02:02:54AM +0000, Zhangleiqiang wrote:
> > I have also finished the perf testing under Fedora 17 using IOZone, and
> > the results also shown that the performance of disk with dataplane
> > enabled did not have advantage over non-dataplane.
>
> virtio-blk data plane is a win for parallel I/O workloads (that means
> iodepth > 1). The advantage becomes clearer with SMP guests.
>
> In other words the big advantage is that data plane processes requests
> without blocking the QEMU main loop or vCPU threads.
>
> If your guest has 1 vCPU and/or your benchmarks only do a single stream
> of I/O requests, then the difference may not be measurable.
>
> Stefan
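P.S. As a quick sanity check on the results table, the MBPS columns are consistent with IOPS times the 16 KiB request size (MB/s = IOPS * 16 / 1024); this can be recomputed in the shell:

```shell
# Recompute MB/s from the IOPS column: each request is 16 KiB,
# so MB/s = IOPS * 16 / 1024. IOPS values copied from the table above.
for iops in 303.178867 300.511928 21748.887189 7631.164060; do
    awk -v iops="$iops" \
        'BEGIN { printf "%s IOPS -> %.6f MB/s\n", iops, iops * 16 / 1024 }'
done
```

The recomputed values match the reported MBPS columns to six decimal places, confirming the 16K request size was used throughout.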