Hi Stefan.  

Well, I'm trying to determine which I/O method currently has the least 
performance overhead and delivers the best performance for both reads and writes.

I am doing my testing by putting the entire guest onto a ramdisk.  I'm working 
on an i5-760 with 16 GB RAM and VT-d enabled.  I am running the stock CentOS 6 
kernel with the qemu-kvm 0.12.1.2 release that ships with CentOS 6.  The guest 
is configured with 512 MB RAM and 4 CPU cores, with its /dev/vda backed by the 
ramdisk on the host.
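
In case it helps, the setup is along these lines (the mount point, size, and 
image name below are just placeholders; the flags are the standard ones from 
the qemu-kvm 0.12 series):

  # put the guest image on a tmpfs ramdisk (paths are examples only)
  mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk
  cp guest.img /mnt/ramdisk/guest.img

  # boot with 512 MB RAM, 4 vcpus, and the image as virtio /dev/vda
  qemu-kvm -m 512 -smp 4 \
      -drive file=/mnt/ramdisk/guest.img,if=virtio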

I'm not opposed to building a custom kernel or KVM if I can reliably get 
better performance.  However, my initial attempts with the 3.3.1 kernel and 
the latest kvm gave mixed results.
  
I've been using iozone 3.98 with -O -l32 -i0 -i1 -i2 -e -+n -r4K -s250M to 
measure performance.
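
To be precise about what each flag does (per the iozone documentation, as I 
read it):

  #   -O      report results in operations per second
  #   -l32    lower limit of 32 processes
  #   -i0     test 0: write / rewrite
  #   -i1     test 1: read / re-read
  #   -i2     test 2: random read / random write
  #   -e      include flush (fsync, fflush) in the timings
  #   -+n     no retests (skip the rewrite / re-read passes)
  #   -r4K    4 KB record size
  #   -s250M  250 MB file size
  iozone -O -l32 -i0 -i1 -i2 -e -+n -r4K -s250M

This is run inside the guest, against a filesystem on /dev/vda.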

So, I was interested in vhost-blk since it seemed like a promising avenue to 
explore.  If you have any other suggestions, those would also be helpful.

-Mike



----- Original Message -----
From: "Stefan Hajnoczi" <stefa...@gmail.com>
To: "Michael Baysek" <mbay...@liquidweb.com>
Cc: kvm@vger.kernel.org
Sent: Tuesday, April 10, 2012 4:55:26 AM
Subject: Re: vhost-blk development

On Mon, Apr 9, 2012 at 11:59 PM, Michael Baysek <mbay...@liquidweb.com> wrote:
> Hi all.  I'm interested in any developments on the vhost-blk in-kernel 
> accelerator for disk I/O.
>
> I had seen a patchset on LKML https://lkml.org/lkml/2011/7/28/175 but that is 
> rather old.  Are there any newer developments going on with the vhost-blk 
> stuff?

Hi Michael,
I'm curious what you are looking for in vhost-blk.  Are you trying to
improve disk performance for KVM guests?

Perhaps you'd like to share your configuration, workload, and other
details so that we can discuss how to improve performance.

Stefan