From: Liu Yuan
Vhost-blk driver is an in-kernel accelerator, intercepting the
IO requests from KVM virtio-capable guests. It is based on the
vhost infrastructure.
This is supposed to be a module over the latest kernel tree, but it
needs some symbols from fs/aio.c and fs/eventfd.c to compile with
From: Liu Yuan
vhost-blk is an in-kernel accelerator for the virtio-blk
device. This patch is the counterpart of the vhost-blk
module in the kernel. It basically sets up vhost-blk and
passes on the virtio buffer information via
/dev/vhost-blk.
Usage:
$ qemu -drive file=path/to/image,if=virtio
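(For context, a minimal sketch of what that setup path could look like,
assuming /dev/vhost-blk reuses the generic vhost ioctls from
<linux/vhost.h> the way /dev/vhost-net does; names and error handling
here are simplified and not taken from the patch.)

	#include <fcntl.h>
	#include <sys/eventfd.h>
	#include <sys/ioctl.h>
	#include <linux/vhost.h>

	static int vhost_blk_setup(void)
	{
		int fd = open("/dev/vhost-blk", O_RDWR);
		struct vhost_vring_file kick = { .index = 0, .fd = eventfd(0, 0) };
		struct vhost_vring_file call = { .index = 0, .fd = eventfd(0, 0) };

		ioctl(fd, VHOST_SET_OWNER);              /* bind this process as owner */
		ioctl(fd, VHOST_SET_VRING_KICK, &kick);  /* guest notifies via ioeventfd */
		ioctl(fd, VHOST_SET_VRING_CALL, &call);  /* host signals completion via irqfd */
		/* the vring layout itself is passed in via VHOST_SET_VRING_ADDR */
		return fd;
	}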
[design idea]
The vhost-blk uses two kernel threads to handle the guests' requests.
One is to submit them via the Linux kernel's internal AIO structs, and the other is
to signal completion of the IO requests to the guests.
The current qemu-kvm's native AIO in user mode actually ju
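(To make the two-thread split concrete, here is an illustrative
userspace analogue using libaio and an eventfd in place of the
kernel-internal AIO structs; every name below is made up for the
sketch, none is from the patch. Build with: gcc sketch.c -laio -lpthread)

	#include <fcntl.h>
	#include <libaio.h>
	#include <pthread.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <sys/eventfd.h>
	#include <unistd.h>

	static io_context_t ctx;
	static int done_efd;	/* plays the role of vhost's "call" eventfd */

	/* Thread 1: submit the guest's request as AIO. */
	static void *submit_thread(void *arg)
	{
		struct iocb *iocbp = arg;
		io_set_eventfd(iocbp, done_efd);  /* completion bumps the eventfd */
		io_submit(ctx, 1, &iocbp);
		return NULL;
	}

	/* Thread 2: reap completions and signal the guest. */
	static void *complete_thread(void *unused)
	{
		uint64_t n;
		struct io_event ev;

		read(done_efd, &n, sizeof(n));    /* block until a completion lands */
		io_getevents(ctx, 1, 1, &ev, NULL);
		printf("done: %ld bytes\n", (long)ev.res);  /* the real module would
							       fill the used ring and
							       inject an interrupt */
		return NULL;
	}

	int main(void)
	{
		static char buf[4096];
		struct iocb iocb;
		pthread_t t1, t2;
		int fd = open("/etc/hostname", O_RDONLY);

		done_efd = eventfd(0, 0);
		io_setup(8, &ctx);
		io_prep_pread(&iocb, fd, buf, sizeof(buf), 0);
		pthread_create(&t2, NULL, complete_thread, NULL);
		pthread_create(&t1, NULL, submit_thread, &iocb);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		return 0;
	}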
Hi Stefan
On 07/28/2011 11:44 PM, Stefan Hajnoczi wrote:
On Thu, Jul 28, 2011 at 3:29 PM, Liu Yuan wrote:
Did you investigate userspace virtio-blk performance? If so, what
issues did you find?
Yes, in the performance table I presented, virtio-blk in user space
lags behind the vhost
Hi
On 07/29/2011 12:48 PM, Stefan Hajnoczi wrote:
On Thu, Jul 28, 2011 at 4:44 PM, Stefan Hajnoczi wrote:
On Thu, Jul 28, 2011 at 3:29 PM, Liu Yuan wrote:
Did you investigate userspace virtio-blk performance? If so, what
issues did you find?
I have a hacked up world here that basically
On 07/28/2011 10:47 PM, Christoph Hellwig wrote:
On Thu, Jul 28, 2011 at 10:29:05PM +0800, Liu Yuan wrote:
From: Liu Yuan
Vhost-blk driver is an in-kernel accelerator, intercepting the
IO requests from KVM virtio-capable guests. It is based on the
vhost infrastructure.
This is supposed to be
On 07/29/2011 05:06 PM, Stefan Hajnoczi wrote:
I mean did you investigate *why* userspace virtio-blk has higher
latency? Did you profile it and drill down on its performance?
It's important to understand what is going on before replacing it with
another mechanism. What I'm saying is, if I have
On 07/29/2011 08:50 PM, Stefan Hajnoczi wrote:
I hit a weirdness yesterday, just want to mention it in case you notice it too.
When running vanilla qemu-kvm I forgot to use aio=native. When I
compared the results against virtio-blk-data-plane (which *always*
uses Linux AIO) I was surprised to f
On 07/29/2011 10:45 PM, Liu Yuan wrote:
On 07/29/2011 08:50 PM, Stefan Hajnoczi wrote:
I hit a weirdness yesterday, just want to mention it in case you
notice it too.
When running vanilla qemu-kvm I forgot to use aio=native. When I
compared the results against virtio-blk-data-plane (which
On 07/28/2011 11:22 PM, Michael S. Tsirkin wrote:
On Thu, Jul 28, 2011 at 10:29:05PM +0800, Liu Yuan wrote:
From: Liu Yuan
Vhost-blk driver is an in-kernel accelerator, intercepting the
IO requests from KVM virtio-capable guests. It is based on the
vhost infrastructure.
This is supposed to be
On 07/30/2011 02:12 AM, Badari Pulavarty wrote:
Hi Liu Yuan,
I am glad to see that you started looking at vhost-blk. I made an
attempt a year ago to improve block
performance using the vhost-blk approach.
http://lwn.net/Articles/379864/
http://lwn.net/Articles/382543/
I will take a closer look
On 07/28/2011 11:22 PM, Michael S. Tsirkin wrote:
It would be nicer to reuse the worker infrastructure
from vhost.c. In particular, this one ignores cgroups that
the owner belongs to, if any.
Does this one do anything that vhost.c doesn't?
The main idea is that I use a separate thread to handle comp
On 08/01/2011 04:12 PM, Michael S. Tsirkin wrote:
On Mon, Aug 01, 2011 at 02:25:36PM +0800, Liu Yuan wrote:
On 07/28/2011 11:22 PM, Michael S. Tsirkin wrote:
It would be nicer to reuse the worker infrastructure
from vhost.c. In particular, this one ignores cgroups that
the owner belongs
On 08/01/2011 04:17 PM, Avi Kivity wrote:
On 07/29/2011 06:25 PM, Sasha Levin wrote:
On Fri, 2011-07-29 at 20:01 +0800, Liu Yuan wrote:
> Looking at this long list, most are function pointers that cannot be
> inlined, and the internal data structures used by these functions are
>
On 08/05/2011 05:58 AM, Badari Pulavarty wrote:
Hi Liu Yuan,
I started testing your patches. I applied your kernel patch to 3.0
and applied the QEMU patch to the latest git.
I passed 6 block devices from the host to the guest (4 vcpus, 4GB RAM).
I ran simple "dd" read tests from the guest on all blo
On 08/06/2011 02:02 AM, Badari Pulavarty wrote:
On 8/5/2011 4:04 AM, Liu Yuan wrote:
On 08/05/2011 05:58 AM, Badari Pulavarty wrote:
Hi Liu Yuan,
I started testing your patches. I applied your kernel patch to 3.0
and applied the QEMU patch to the latest git.
I passed 6 block devices from the host to the guest
On 08/08/2011 01:04 PM, Badari Pulavarty wrote:
On 8/7/2011 6:35 PM, Liu Yuan wrote:
On 08/06/2011 02:02 AM, Badari Pulavarty wrote:
On 8/5/2011 4:04 AM, Liu Yuan wrote:
On 08/05/2011 05:58 AM, Badari Pulavarty wrote:
Hi Liu Yuan,
I started testing your patches. I applied your kernel patch
On 08/09/2011 01:16 AM, Badari Pulavarty wrote:
On 8/8/2011 12:31 AM, Liu Yuan wrote:
On 08/08/2011 01:04 PM, Badari Pulavarty wrote:
On 8/7/2011 6:35 PM, Liu Yuan wrote:
On 08/06/2011 02:02 AM, Badari Pulavarty wrote:
On 8/5/2011 4:04 AM, Liu Yuan wrote:
On 08/05/2011 05:58 AM, Badari
It looks like the patch wouldn't work for testing multiple devices.
vhost_blk_open() does
+ used_info_cachep = KMEM_CACHE(used_info, SLAB_HWCACHE_ALIGN |
SLAB_PANIC);
This is weird. How do you open multiple devices? I just opened the device
with the following command:
-drive file=/dev/sda
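(One plausible fix, sketched under the assumption that the cache is
really global state: create it once at module load instead of in every
vhost_blk_open(), since registering a second slab cache with the same
name fails, and SLAB_PANIC turns that failure into a crash when a
second device is opened. The struct layout below is a placeholder, not
the patch's.)

	#include <linux/module.h>
	#include <linux/slab.h>
	#include <linux/types.h>

	struct used_info {		/* placeholder fields, for illustration only */
		u16 id;
		u32 len;
	};

	static struct kmem_cache *used_info_cachep;

	static int __init vhost_blk_init(void)
	{
		/* created once here, not per open(); drop SLAB_PANIC and
		 * handle the failure instead */
		used_info_cachep = KMEM_CACHE(used_info, SLAB_HWCACHE_ALIGN);
		if (!used_info_cachep)
			return -ENOMEM;
		return 0;		/* misc device registration would follow */
	}

	static void __exit vhost_blk_exit(void)
	{
		kmem_cache_destroy(used_info_cachep);
	}

	module_init(vhost_blk_init);
	module_exit(vhost_blk_exit);
	MODULE_LICENSE("GPL");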
On 08/11/2011 11:01 AM, Liu Yuan wrote:
It looks like the patch wouldn't work for testing multiple devices.
vhost_blk_open() does
+ used_info_cachep = KMEM_CACHE(used_info, SLAB_HWCACHE_ALIGN |
SLAB_PANIC);
This is weird. How do you open multiple devices? I just opened the
device
On 08/12/2011 12:50 PM, Badari Pulavarty wrote:
On 8/10/2011 8:19 PM, Liu Yuan wrote:
On 08/11/2011 11:01 AM, Liu Yuan wrote:
It looks like the patch wouldn't work for testing multiple devices.
vhost_blk_open() does
+ used_info_cachep = KMEM_CACHE(used_info, SLAB_HWCACHE_
On 08/12/2011 04:27 PM, Liu Yuan wrote:
On 08/12/2011 12:50 PM, Badari Pulavarty wrote:
On 8/10/2011 8:19 PM, Liu Yuan wrote:
On 08/11/2011 11:01 AM, Liu Yuan wrote:
It looks like the patch wouldn't work for testing multiple devices.
vhost_blk_open() does
+ used_info_c
On 08/13/2011 12:12 AM, Badari Pulavarty wrote:
On 8/12/2011 4:40 AM, Liu Yuan wrote:
On 08/12/2011 04:27 PM, Liu Yuan wrote:
On 08/12/2011 12:50 PM, Badari Pulavarty wrote:
On 8/10/2011 8:19 PM, Liu Yuan wrote:
On 08/11/2011 11:01 AM, Liu Yuan wrote:
It looks like the patch wouldn't
On 08/15/2011 12:17 PM, Badari Pulavarty wrote:
On 8/14/2011 8:20 PM, Liu Yuan wrote:
On 08/13/2011 12:12 AM, Badari Pulavarty wrote:
On 8/12/2011 4:40 AM, Liu Yuan wrote:
On 08/12/2011 04:27 PM, Liu Yuan wrote:
On 08/12/2011 12:50 PM, Badari Pulavarty wrote:
On 8/10/2011 8:19 PM, Liu Yuan
On 04/12/2012 12:52 AM, Michael Baysek wrote:
> In this particular case, I did intend to deploy these instances directly to
> the ramdisk. I want to squeeze every drop of performance out of these
> instances for use cases with lots of concurrent accesses. I thought it
> would be possible to
On 04/20/2012 04:26 AM, Michael Baysek wrote:
> Can you point me to the latest revision of the code and provide some
> guidance on how to test it? I really would love to see if it helps.
There is no latest revision; I didn't continue the development when I
saw signs that it wouldn't be acce
From: Liu Yuan
Function ioapic_debug() in ioapic_deliver() misnames
one field it references. This patch corrects it.
Signed-off-by: Liu Yuan
---
virt/kvm/ioapic.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c
index 0b9df83