On Tue, 2015-12-01 at 11:59 -0500, Paolo Bonzini wrote:
> > What do you think about virtio-nvme+vhost-nvme?
>
> What would be the advantage over virtio-blk? Multiqueue is not supported
> by QEMU but it's already supported by Linux (commit 6a27b656fc).
I expect performance would be better.
Seems …
> What do you think about virtio-nvme+vhost-nvme?
What would be the advantage over virtio-blk? Multiqueue is not supported
by QEMU but it's already supported by Linux (commit 6a27b656fc).
To me, the advantage of nvme is that it provides more than decent performance on
unmodified Windows guests, …
On 01/12/2015 00:20, Ming Lin wrote:
> qemu-nvme: 148MB/s
> vhost-nvme + google-ext: 230MB/s
> qemu-nvme + google-ext + eventfd: 294MB/s
> virtio-scsi: 296MB/s
> virtio-blk: 344MB/s
>
> "vhost-nvme + google-ext" didn't get good enough performance.
I'd expect it to be on par with qemu-nvme with io…
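
The spread in these numbers matches the usual explanation: plain qemu-nvme
takes a heavyweight MMIO exit in the vcpu thread for every doorbell write,
while the "+ eventfd" variant lets KVM complete the doorbell write in-kernel
and merely signal an eventfd. A minimal sketch of how such a registration
typically looks in QEMU (the exact offsets and wiring used by the patches
aren't shown in these excerpts; the wrapper function is illustrative):

#include "exec/memory.h"
#include "qemu/event_notifier.h"

/* Register an ioeventfd on an NVMe submission-queue doorbell so KVM
 * handles the guest's 4-byte doorbell write in-kernel and only signals
 * the eventfd -- no exit to userspace on the hot path. */
static void nvme_enable_doorbell_ioeventfd(MemoryRegion *doorbell_mr,
                                           hwaddr sq_db_offset,
                                           EventNotifier *notifier)
{
    event_notifier_init(notifier, 0);
    memory_region_add_eventfd(doorbell_mr, sq_db_offset, 4,
                              false, 0, notifier);
}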
On 25/11/2015 19:51, Ming Lin wrote:
> > Do you still have a blk_set_aio_context somewhere? I'm losing track of
> > the changes.
>
> No.
You'll need it. That's what causes your error.
> BTW, I'm not sure about qemu upstream policy.
> Do I need to first make the kernel side patch upstream?
On Wed, 2015-11-25 at 12:27 +0100, Paolo Bonzini wrote:
> Do you still have a blk_set_aio_context somewhere? I'm losing track of
> the changes.
No.
>
> In any case, I think using a separate I/O thread is a bit premature,
> except for benchmarking. In the meanwhile I think the best option is to …
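
For context on the blk_set_aio_context exchange above: once a device
processes its queues in a dedicated IOThread, its BlockBackend must be
moved into that thread's AioContext, or submission and completion run in
different event loops and trip exactly the kind of error discussed here.
A minimal sketch against the QEMU ~2.5-era API (blk_set_aio_context,
iothread_get_aio_context and the acquire/release pair are real QEMU calls;
the wrapper is illustrative):

#include "block/aio.h"
#include "sysemu/block-backend.h"
#include "sysemu/iothread.h"

/* Bind a BlockBackend to an IOThread's AioContext before starting
 * dataplane processing. */
static void dataplane_start(BlockBackend *blk, IOThread *iothread)
{
    AioContext *ctx = iothread_get_aio_context(iothread);

    aio_context_acquire(ctx);
    /* Without this, I/O is submitted from the iothread while the
     * BlockBackend still belongs to the main loop's context. */
    blk_set_aio_context(blk, ctx);
    aio_context_release(ctx);
}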
On 24/11/2015 08:27, Ming Lin wrote:
> handle_notify (qemu/hw/block/dataplane/virtio-blk.c:126)
> aio_dispatch (qemu/aio-posix.c:329)
> aio_poll (qemu/aio-posix.c:474)
> iothread_run (qemu/iothread.c:45)
> start_thread (pthread_create.c:312)
> /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)
>
> I th…
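
The backtrace above is the virtio-blk dataplane hot path: the iothread
blocks in aio_poll(), a guest kick signals an eventfd, and aio_dispatch()
runs the handler registered for it. A paraphrased shape of that handler
(not the verbatim QEMU source; "MyDataPlane" is an illustrative stand-in
for the struct in hw/block/dataplane/virtio-blk.c):

#include "qemu/osdep.h"          /* container_of */
#include "qemu/event_notifier.h"

typedef struct MyDataPlane {
    EventNotifier host_notifier;   /* signaled by KVM on a guest kick */
    /* ... virtqueue state ... */
} MyDataPlane;

static void handle_notify(EventNotifier *e)
{
    MyDataPlane *s = container_of(e, MyDataPlane, host_notifier);

    /* Clear first so a kick arriving while we drain isn't lost. */
    event_notifier_test_and_clear(&s->host_notifier);

    /* ... pop requests from the virtqueue and submit them ... */
}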
On 20/11/2015 01:20, Ming Lin wrote:
> One improvement could be to use Google's NVMe vendor extension that
> I sent in another thread, also here:
> https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/log/?h=nvme-google-ext
>
> Qemu side:
> http://www.minggr.net/cgit/cgit.cgi/qemu/log/?h=v
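
The branch isn't quoted in detail in this thread, but the core of such a
vendor extension is a "shadow doorbell": the guest writes the new
submission-queue tail into ordinary shared memory and performs the
VM-exiting MMIO doorbell write only when the host's event index says a
wakeup is needed. An illustrative guest-side sketch, with all names
invented (the real layout is defined by the patches; real code would
also need memory barriers):

#include <stdint.h>

struct nvme_shadow_doorbell {
    volatile uint32_t sq_tail;    /* guest publishes the new tail here  */
    volatile uint32_t event_idx;  /* host: "MMIO-kick me past this idx" */
};

static void nvme_ring_sq_doorbell(struct nvme_shadow_doorbell *db,
                                  uint32_t old_tail, uint32_t new_tail,
                                  volatile uint32_t *mmio_db)
{
    db->sq_tail = new_tail;

    /* Wraparound-safe "need event" test, as in virtio: only take the
     * VM-exiting MMIO write if the host asked to be woken somewhere in
     * the (old_tail, new_tail] window. */
    if ((uint32_t)(new_tail - db->event_idx - 1) <
        (uint32_t)(new_tail - old_tail)) {
        *mmio_db = new_tail;
    }
}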
On Fri, 2015-11-20 at 06:16 +0100, Christoph Hellwig wrote:
> Thanks Ming,
>
> from a first quick view this looks great. I'll look over it in a bit
> more detail once I get a bit more time.
Thanks for CCing Nic :-)
Funny, though: I double-checked my bash history, and I actually did CC Nic.
Don't know why it's l…
Thanks Ming,
from a first quick view this looks great. I'll look over it in a bit
more detail once I get a bit more time.
Hi,
This is the first attempt to add a new qemu nvme backend using the
in-kernel nvme target.
Most code is ported from qemu-nvme, and some is borrowed from
Hannes Reinecke's rts-megasas.
It's similar to vhost-scsi, but doesn't use virtio.
The advantage is that the guest can run an unmodified NVMe driver.
So gue…
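
These excerpts don't show the series' userspace/kernel interface, but
"similar to vhost-scsi, without virtio" suggests the usual vhost split:
QEMU keeps emulating the NVMe PCI presence (config space, BARs, doorbells)
while queue processing lives in the in-kernel target, driven by eventfds.
A hypothetical sketch of the setup sequence; the device node and the
ioctl name are invented stand-ins for whatever ABI the kernel-side
patches define:

#include <fcntl.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>

int setup_vhost_nvme(void)
{
    int vhost_fd = open("/dev/vhost-nvme", O_RDWR);   /* hypothetical node */
    int kick_fd  = eventfd(0, EFD_CLOEXEC);           /* guest -> kernel   */
    int call_fd  = eventfd(0, EFD_CLOEXEC);           /* kernel -> guest   */

    if (vhost_fd < 0 || kick_fd < 0 || call_fd < 0) {
        return -1;
    }

    /* ioctl(vhost_fd, VHOST_NVME_SET_EVENTFD, ...);  <- invented name:
     * hand the kernel target the eventfds to use for queue kicks and
     * completion interrupts, as vhost-scsi does for its queues. */
    return vhost_fd;
}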