That sounds about right. There was some prior discussion about this on the
openstack-operators group with similar results.

We use virtio-scsi in one of our clouds because testing (and production
experience) has shown that volumes attached via virtio-scsi behave more
reliably as members of mdadm arrays and ZFS pools. For that particular
cloud, that's worth the performance loss.
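For anyone who wants to reproduce the comparison, the disk bus is selected
per image in OpenStack via image properties. A minimal sketch, assuming the
`openstack` CLI is configured and "my-image" stands in for your image name:

```shell
# Instances booted from this image will attach disks via virtio-scsi
# instead of the default virtio-blk ("my-image" is a placeholder).
openstack image set \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    my-image
```

Booting two instances from otherwise identical images, one with and one
without these properties, is an easy way to benchmark both buses on the
same host.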

On Fri, Mar 31, 2017 at 9:21 AM, shubjero <[email protected]> wrote:

> Hi all,
>
> I've recently done some disk benchmarks (dd and bonnie++) between
> instances with virtio-blk and with virtio-scsi and found that on my test
> bed virtio-scsi performed 11.68% slower for writes and 4.83% slower for
> reads. I wasn't expecting a performance loss with virtio-scsi. Has anyone
> else experienced this? I was looking at using virtio-scsi to gain
> discard/trim support but not if it results in a net loss in disk
> performance.
>
> Test bed details:
> Ubuntu 14.04 LTS
> Kernel: 3.13.0-111-generic
> Libvirt:1.3.1-1ubuntu10.6~cloud0
> qemu:2.5+dfsg-5ubuntu10.5~cloud0
>
> Attached are the detailed results of the benchmarking. I am going to
> pursue some tests with Ubuntu 16.04 as the host.
>
> Thanks,
>
> Jared
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : [email protected]
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>