ZFS's zvol is a really bad virtual disk backend when you have only a few HDDs (or even SSDs) - I had a lot of problems with latency and throughput on my 2-disk mirrored testbench. Hard to tell whether it's ZFS itself, ZoL, or just not enough disks and a slow controller.
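For context, the zvols in question are plain block volumes along these lines (pool/volume name and size are placeholders; matching volblocksize to the guest filesystem block size is the usual tuning suggestion, though take this as a sketch, not a recipe):

  # create a 40G zvol; 8k volblocksize to roughly match the guest FS block size
  zfs create -V 40G -o volblocksize=8k tank/vms/disk0
  # qemu then consumes it as a raw block device at /dev/zvol/tank/vms/disk0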
On the other hand, LVM runs great on a single SSD (directsync+native) or even on RAID1 HDDs (mdadm; none+native), achieving close to bare-metal performance with virtio-scsi. A rough sketch of that setup is at the end of this mail.

On 07/19/2016 04:12 PM, Jiri 'Ghormoon' Novak wrote:
> Hi,
>
> for quite a long time I've had an issue with the drive configuration on
> my VMs that do VGA passthrough (but I think it's unrelated to
> passthrough, as they would lag without passthrough too).
> The config is
>
> -drive
> file=/dev/something,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
> -device
> virtio-blk-pci,scsi=off,bus=pci.2,addr=0x1,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>
> What parameter am I missing so that the guest would wait for IO instead
> of the host pausing it until it can provide the IO?
>
> For some time I thought it was a Windows problem, but it also shows up
> on my Linux guest when IO falls behind.
>
> If it matters, the backend is a ZFS zvol; that's why it hurts a lot,
> especially on HDDs. On SSD it's not that bad, but it still struggles if
> I load the IO heavily from multiple sources.
>
> Thanks,
> Ghor
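P.S. If anyone wants to try the LVM variant I mentioned above, a minimal sketch (VG/LV names and sizes are placeholders - adjust for your machine):

  # carve a raw LV out of an existing volume group on the SSD
  lvcreate -L 40G -n vm-disk0 vg_ssd

  # qemu side: virtio-scsi, directsync+native for the single-SSD case
  # (use cache=none instead of directsync on the mdadm RAID1 setup)
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=/dev/vg_ssd/vm-disk0,if=none,id=drive-scsi0,format=raw,cache=directsync,aio=native \
  -device scsi-hd,bus=scsi0.0,drive=drive-scsi0,bootindex=1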