Re: [Qemu-devel] [PATCH 0/4] RFC: A VFIO based block driver for NVMe device

2016-12-29 Thread Fam Zheng
On Thu, 12/29 04:09, Tian, Kevin wrote:
> is it a tradeoff between performance (better than linux-aio) and composability
> (snapshot and live migration, which are not supported by direct passthrough)?

Yes.

Fam

Re: [Qemu-devel] [PATCH 0/4] RFC: A VFIO based block driver for NVMe device

2016-12-28 Thread Tian, Kevin
> From: Fam Zheng
> Sent: Wednesday, December 21, 2016 12:32 AM
>
> This series adds a new protocol driver that is intended to achieve about 20%
> better performance for latency bound workloads (i.e. synchronous I/O) than
> linux-aio when the guest is exclusively accessing an NVMe device, by talking to

Re: [Qemu-devel] [PATCH 0/4] RFC: A VFIO based block driver for NVMe device

2016-12-20 Thread Fam Zheng
On Tue, 12/20 15:04, no-re...@patchew.org wrote:
> ERROR: that open brace { should be on the previous line
> #287: FILE: util/vfio-helpers.c:214:
> +struct vfio_group_status group_status =
> +{ .argsz = sizeof(group_status) };

Hmm, it may indeed look better.

> ERROR: Use of volatile is us
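For illustration (not part of the original message), a minimal sketch of the initializer rewritten the way checkpatch suggests, keeping the opening brace on the declaration line; this assumes the declaration shown in the report from util/vfio-helpers.c:

    /* Brace stays on the declaration line, per QEMU coding style. */
    struct vfio_group_status group_status = { .argsz = sizeof(group_status) };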

Re: [Qemu-devel] [PATCH 0/4] RFC: A VFIO based block driver for NVMe device

2016-12-20 Thread no-reply
Hi,

Your series failed automatic build test. Please find the testing commands and their output below. If you have docker installed, you can probably reproduce it locally.

Type: series
Message-id: 20161220163139.12016-1-f...@redhat.com
Subject: [Qemu-devel] [PATCH 0/4] RFC: A VFIO based block

Re: [Qemu-devel] [PATCH 0/4] RFC: A VFIO based block driver for NVMe device

2016-12-20 Thread no-reply
Hi,

Your series seems to have some coding style problems. See output below for more information:

Subject: [Qemu-devel] [PATCH 0/4] RFC: A VFIO based block driver for NVMe device
Message-id: 20161220163139.12016-1-f...@redhat.com
Type: series

=== TEST SCRIPT BEGIN ===
#!/bin/bash
BASE=base
n=1

[Qemu-devel] [PATCH 0/4] RFC: A VFIO based block driver for NVMe device

2016-12-20 Thread Fam Zheng
This series adds a new protocol driver that is intended to achieve about 20% better performance for latency bound workloads (i.e. synchronous I/O) than linux-aio when the guest is exclusively accessing an NVMe device, by talking to the device directly instead of through kernel file system layers and its
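(Illustration only, not part of the original cover letter.) A rough sketch of how a guest might be attached to such a driver from the command line, assuming the nvme:// URI syntax of the NVMe block driver that was later merged into QEMU; the PCI address 0000:01:00.0 and namespace 1 are placeholder values:

    # The host NVMe controller is first unbound from its kernel driver and
    # bound to vfio-pci; QEMU then drives it directly from userspace.
    qemu-system-x86_64 \
        -drive file=nvme://0000:01:00.0/1,if=none,id=nvme0 \
        -device virtio-blk-pci,drive=nvme0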