- Original message -
From: "Vasiliy Tolstov"
To: "aderumier"
Cc: "qemu-devel"
Sent: Wednesday, 25 November 2015 11:48:11
Subject: Re: [Qemu-devel] poor virtio-scsi performance (fio testing)
2015-11-25 13:27 GMT+03:00 Alexandre DERUMIER :
> I have tested with a raw file, qemu 2.4 + virtio-scsi (without iothread); I'm
> around 25k iops
> with an intel ssd 3500. (host cpus are xeon v3, 3.1GHz)
What scheduler do you have on the host system? Maybe my default cfq slows it down?
--
Vasiliy Tolstov,
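For context on the cfq-vs-deadline question above, the host's active I/O scheduler can be inspected through sysfs; a minimal read-only sketch (device names vary per system):

```shell
# List each block device's available I/O schedulers; the active one is
# shown in brackets, e.g. "noop [deadline] cfq".
for q in /sys/block/*/queue/scheduler; do
    [ -e "$q" ] || continue
    printf '%s: ' "$q"
    cat "$q"
done
```

Writing a scheduler name to the same file switches it, e.g. `echo deadline > /sys/block/sdb/queue/scheduler` as root (sdb here is only an example device).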
- Original message -
From: "Vasiliy Tolstov"
Cc: "qemu-devel"
Sent: Wednesday, 25 November 2015 11:12:33
Subject: Re: [Qemu-devel] poor virtio-scsi performance (fio testing)
2015-11-25 13:08 GMT+03:00 Alexandre DERUMIER :
> Maybe could you try to create 2 disk in your vm, each with 1 dedicated
> iothread,
>
> then try to run fio on both disk at the same time, and see if performance
> improve.
>
That's fine, but by default I have only one disk inside the vm, so I prefer [...]
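Alexandre's two-disk suggestion above maps onto QEMU's iothread objects: one `-object iothread` per disk, with each virtio-scsi controller bound to its own iothread. A sketch of the relevant command-line fragment (file paths and ids are placeholders, not values from this thread):

```shell
# Hypothetical fragment: two virtio-scsi controllers, each pinned to a
# dedicated iothread and backed by a separate raw drive.
qemu-system-x86_64 \
  -object iothread,id=iothread1 \
  -object iothread,id=iothread2 \
  -drive file=/path/disk1.raw,format=raw,if=none,id=drive1,cache=none,aio=native \
  -drive file=/path/disk2.raw,format=raw,if=none,id=drive2,cache=none,aio=native \
  -device virtio-scsi-pci,id=scsi1,iothread=iothread1 \
  -device virtio-scsi-pci,id=scsi2,iothread=iothread2 \
  -device scsi-hd,drive=drive1,bus=scsi1.0 \
  -device scsi-hd,drive=drive2,bus=scsi2.0
```

With this layout, running fio against both guest disks at once exercises the two iothreads in parallel, which is what the test above is meant to show.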
2015-11-25 12:35 GMT+03:00 Stefan Hajnoczi :
> You can get better aio=native performance with qemu.git/master. Please
> see commit fc73548e444ae3239f6cef44a5200b5d2c3e85d1 ("virtio-blk: use
> blk_io_plug/unplug for Linux AIO batching").
Thanks Stefan! Does this patch apply only to virtio-blk, or can it [...]
or raw file ?
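Stefan's suggestion concerns aio=native, which in QEMU requires a cache mode that opens the image with O_DIRECT. A hedged single-disk example (the lvm path is a placeholder):

```shell
# aio=native needs cache=none (O_DIRECT) to use Linux AIO; with
# aio=threads QEMU falls back to a userspace thread pool instead.
qemu-system-x86_64 \
  -drive file=/dev/vg0/lv_test,format=raw,if=none,id=d0,cache=none,aio=native \
  -device virtio-scsi-pci,id=scsi0 \
  -device scsi-hd,drive=d0
```

The same `-drive` options apply whether the backing store is an lvm volume or a raw file.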
- Original message -
From: "Vasiliy Tolstov"
To: "qemu-devel"
Sent: Thursday, 19 November 2015 09:16:22
Subject: [Qemu-devel] poor virtio-scsi performance (fio testing)
I'm testing virtio-scsi on various kernels (with and without scsi-mq)
with the deadline io scheduler (best performance). I'm testing with an lvm thin
volume and with sheepdog storage. Data goes to an ssd that, on the host
system, does about 30K iops.
When I test via fio:
[randrw]
blocksize=4k
filename=/dev/sdb
rw=randrw
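The job file above is cut off; a complete job in the same spirit might look like the following (everything after rw= is an assumption for illustration, not taken from the original mail):

```shell
# Hypothetical 4k random read/write fio job; direct=1, libaio and the
# iodepth/runtime values are assumed, /dev/sdb is the guest disk as above.
cat > /tmp/randrw.fio <<'EOF'
[randrw]
blocksize=4k
filename=/dev/sdb
rw=randrw
direct=1
ioengine=libaio
iodepth=32
runtime=60
time_based=1
EOF
cat /tmp/randrw.fio
```

Run it with `fio /tmp/randrw.fio` inside the guest; `direct=1` plus `ioengine=libaio` is the usual way to measure the device rather than the page cache.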