In my (not particularly significant) experience that question never
seems to produce a consistent answer, and virtual drive performance has
noticeably changed for the better in recent versions, so it's all a
moving target. I do know that if=virtio is considered the legacy
approach compared to if=none plus an explicit -device, because even
though the two are usually identical, if=none doesn't create implicit
controllers.
Unless somebody more up to date saves us, I think you will have to test
it yourself. I recently did the least amount of testing possible and
was surprised to find the IDE driver on QEMU 2.5 and a recent OVMF to
be near bare metal with an AHCI controller (it is basically passthrough
AHCI now, I think).
-drive if=none,format=qcow2,cache.direct=on,file=$KUBIMG,aio=native,id=windrive \
-device ide-hd,bus=ide.0,drive=windrive,bootindex=0
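For context, a complete invocation built around those two lines might look roughly like this. The q35 machine type (whose built-in AHCI exposes the ide.0 bus), the OVMF path, and the memory size are my assumptions, and $KUBIMG is a placeholder from the fragment above:

```shell
#!/bin/sh
# Sketch only: -machine q35, the OVMF path and -m are assumptions;
# $KUBIMG is the disk image path from the fragment above.
# On q35 the built-in ich9 AHCI controller provides the ide.0 bus.
qemu-system-x86_64 -enable-kvm \
  -machine q35 \
  -m 4096 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=none,format=qcow2,cache.direct=on,file=$KUBIMG,aio=native,id=windrive \
  -device ide-hd,bus=ide.0,drive=windrive,bootindex=0
```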
But the newest and clearest winner for me is virtio-blk on raw disks /
block devices. It performed exactly like bare metal with the Intel
Rapid Storage drivers on random 4K (80 MB/s), queue depths and
throughput, but I haven't taken the time to pull it apart or repeat the
test. It uses MSI-X, request merging, multiqueue and iothreads, which
makes it my official guess for the latest and greatest.
-drive if=none,format=raw,cache=none,cache.direct=on,file=/dev/sdb,aio=native,id=ssd2,discard=off,detect-zeroes=off \
-object iothread,id=iothread2 \
-device virtio-blk-pci,drive=ssd2,request-merging=on,iothread=iothread2,modern-pio-notify=on,config-wce=off
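Stitched together into one runnable command, that virtio-blk setup would look something like the following; everything here is straight from the fragments above except the memory size, which is a placeholder:

```shell
#!/bin/sh
# Sketch: /dev/sdb and the drive/device options come from the fragments
# above; -m is a placeholder. The guest needs virtio-blk drivers.
qemu-system-x86_64 -enable-kvm \
  -m 4096 \
  -object iothread,id=iothread2 \
  -drive if=none,format=raw,cache=none,cache.direct=on,file=/dev/sdb,aio=native,id=ssd2,discard=off,detect-zeroes=off \
  -device virtio-blk-pci,drive=ssd2,request-merging=on,iothread=iothread2,modern-pio-notify=on,config-wce=off
```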
I did find a write-up describing its development but I can't seem to
find it again (it's a SCSI controller). Not much is written about it,
but I can give you my startup scripts / versions / performance stats if
you try them and they are stupid slow. Try a recent QEMU too if you
don't already.
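For the if=none plus virtio-scsi variant Nick asks about below, the equivalent wiring would look roughly like this. The iothread attachment and the num_queues value are my assumptions, not something I've benchmarked; the drive options mirror the raw virtio-blk example above:

```shell
# Sketch of the if=none + virtio-scsi alternative. num_queues and the
# iothread attachment are assumptions; the image path is from Nick's
# script below.
-object iothread,id=iothread1 \
-drive if=none,format=raw,cache=none,aio=native,file=/media/500GB/win10.img,id=scsidisk \
-device virtio-scsi-pci,id=scsi0,iothread=iothread1,num_queues=4 \
-device scsi-hd,bus=scsi0.0,drive=scsidisk,bootindex=0
```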
On 02/04/16 21:37, Nick Sarnie wrote:
Hi guys,
It seems the biggest limiting factor of my GPU Passthrough setup is
the disk. Does anyone have any tips to optimize it? Also, which is
faster: having the disk with if=virtio, or having the disk with
if=none and a virtio SCSI controller? The raw image is 250GB on a
500GB ext4 hard drive, which is encrypted with dm-crypt. Below is my
current script.
Thanks,
Sarnex
#!/bin/sh
export QEMU_AUDIO_DRV=pa
qemu-system-x86_64 -enable-kvm \
-m 5120 \
-cpu host \
-smp 8,sockets=1,cores=8,threads=1 \
-device vfio-pci,host=01:00.0,x-vga=on,multifunction=on \
-device vfio-pci,host=01:00.1 \
-vga none \
-drive
file=/media/500GB/win10.img,id=disk,if=virtio,cache=none,format=raw \
-device vfio-pci,host=00:12.0 \
-device vfio-pci,host=00:12.2 \
-device vfio-pci,host=00:16.0 \
-device vfio-pci,host=00:16.2 \
-soundhw ac97 \
-rtc base=localtime \
-netdev user,id=net0 \
-device virtio-net-pci,netdev=net0 \
-device virtio-scsi-pci,id=scsi
_______________________________________________
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users