Khoa Huynh <k...@us.ibm.com> writes:

> "Michael S. Tsirkin" <m...@redhat.com> wrote on 11/15/2012 12:48:49 PM:
>
>> From: "Michael S. Tsirkin" <m...@redhat.com>
>> To: Stefan Hajnoczi <stefa...@redhat.com>,
>> Cc: qemu-devel@nongnu.org, Anthony Liguori/Austin/IBM@IBMUS, Paolo
>> Bonzini <pbonz...@redhat.com>, Kevin Wolf <kw...@redhat.com>, Asias
>> He <as...@redhat.com>, Khoa Huynh/Austin/IBM@IBMUS
>> Date: 11/15/2012 12:46 PM
>> Subject: Re: [PATCH 7/7] virtio-blk: add x-data-plane=on|off
>> performance feature
>>
>> On Thu, Nov 15, 2012 at 04:19:06PM +0100, Stefan Hajnoczi wrote:
>> > The virtio-blk-data-plane feature is easy to integrate into
>> > hw/virtio-blk.c.  The data plane can be started and stopped similarly to
>> > vhost-net.
>> >
>> > Users can take advantage of the virtio-blk-data-plane feature using the
>> > new -device virtio-blk-pci,x-data-plane=on property.
>> >
>> > The x-data-plane name was chosen because at this stage the feature is
>> > experimental and likely to see changes in the future.
>> >
>> > If the VM configuration does not support virtio-blk-data-plane an error
>> > message is printed.  Although we could fall back to regular virtio-blk,
>> > I prefer the explicit approach since it prompts the user to fix their
>> > configuration if they want the performance benefit of
>> > virtio-blk-data-plane.
>> >
>> > Limitations:
>> >  * Only format=raw is supported
>> >  * Live migration is not supported
>> >  * Block jobs, hot unplug, and other operations fail with -EBUSY
>> >  * I/O throttling limits are ignored
>> >  * Only Linux hosts are supported due to Linux AIO usage
>> >
>> > Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
>>
>>
>> Would be very interested in learning about the performance
>> impact of this. How does this compare to the current model and
>> to vhost-blk?
>
> I plan to do a complete evaluation of this patchset in the coming days.
> However, I have done quite a bit of performance testing on earlier versions
> of the data-plane and vhost-blk code bits. Here's what I found:
>
> 1) The existing kvm/qemu code can only handle up to about 150,000 IOPS for
> a single KVM guest.  The bottleneck here is the global qemu mutex.
>
> 2) With performance tuning, I was able to achieve 1.33 million IOPS for a
> single KVM guest with data-plane. This is very close to the 1.4-million-IOPS
> limit of my storage setup.

From my POV, if we can get this close to bare metal with
virtio-blk-dataplane, there's absolutely no reason to merge vhost-blk
support.
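
For reference, trying the feature boils down to something like the
invocation below.  This is illustrative only: test.img is a placeholder,
and the cache=none,aio=native drive options are my assumptions based on
the format=raw and Linux AIO notes in the patch description above.

  qemu-system-x86_64 \
      -drive if=none,id=drive0,file=test.img,format=raw,cache=none,aio=native \
      -device virtio-blk-pci,drive=drive0,x-data-plane=on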

We simply lose too much with a kernel-based solution.

I'm sure there's more we can do to improve the userspace implementation
too, like a hypercall-based notify and adaptive polling.
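
To make the adaptive polling idea concrete, here is a rough sketch (not
QEMU code; ring_has_work(), process_requests(), and the tuning constants
are hypothetical stand-ins).  The event loop busy-polls the ring for a
bounded window before blocking on the guest notifier fd, growing the
window when polling catches work and shrinking it when it does not:

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>

    #define POLL_NS_MAX 500000          /* cap the busy-wait at 500us */

    static int64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    /* hypothetical stand-ins for virtqueue access */
    extern bool ring_has_work(void);
    extern void process_requests(void);

    static void event_loop_iteration(int notifier_fd, int64_t *poll_ns)
    {
        int64_t deadline = now_ns() + *poll_ns;
        bool found = false;

        /* busy-poll for up to *poll_ns looking for new requests */
        while (now_ns() < deadline) {
            if (ring_has_work()) {
                found = true;
                break;
            }
        }

        if (found) {
            /* polling paid off: widen the window, up to a cap */
            *poll_ns = *poll_ns ? *poll_ns * 2 : 1000;
            if (*poll_ns > POLL_NS_MAX) {
                *poll_ns = POLL_NS_MAX;
            }
        } else {
            /* polling wasted CPU: shrink the window and fall back
             * to sleeping on the notifier (e.g. an ioeventfd) */
            uint64_t val;
            *poll_ns /= 2;
            (void)read(notifier_fd, &val, sizeof(val));  /* blocks */
        }

        process_requests();
    }

The point of the self-tuning window is that a sustained request stream
gets serviced without any notifier wakeups at all, while an idle queue
quickly decays back to blocking and stops burning CPU.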

Regards,

Anthony Liguori

