On Wed, Jul 12, 2017 at 10:14:48AM +0800, Fam Zheng wrote:
> On Mon, 07/10 15:55, Stefan Hajnoczi wrote:
> > On Wed, Jul 05, 2017 at 09:36:31PM +0800, Fam Zheng wrote:
> > > +static int nvme_co_prw(BlockDriverState *bs, uint64_t offset,
> > > +                       uint64_t bytes, QEMUIOVector *qiov,
> > > +                       bool is_write, int flags)
[...]
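
A minimal sketch of how such a shared prw helper is typically wired up to
the block driver's read/write callbacks (an assumption based on the later
merged block/nvme.c, not necessarily this exact revision):

    static coroutine_fn int nvme_co_preadv(BlockDriverState *bs,
                                           uint64_t offset, uint64_t bytes,
                                           QEMUIOVector *qiov, int flags)
    {
        /* Reads and writes share one submission path, selected by is_write */
        return nvme_co_prw(bs, offset, bytes, qiov, false, flags);
    }

    static coroutine_fn int nvme_co_pwritev(BlockDriverState *bs,
                                            uint64_t offset, uint64_t bytes,
                                            QEMUIOVector *qiov, int flags)
    {
        return nvme_co_prw(bs, offset, bytes, qiov, true, flags);
    }
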
On Wed, Jul 05, 2017 at 09:36:31PM +0800, Fam Zheng wrote:
> +static bool nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
> +{
> +    BDRVNVMeState *s = bs->opaque;
> +    uint8_t *resp;
> +    int r;
> +    uint64_t iova;
> +    NvmeCmd cmd = {
> +        .opcode = NVME_ADM_CMD_IDENTIFY,
[...]
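
For readers unfamiliar with the admin queue flow: an Identify command
selects what to report via the CNS field in CDW10 (0x1 = identify
controller, per the NVMe spec), and the response lands in a DMA-mapped
4 KiB buffer addressed through PRP1. A sketch of how the function likely
continues (nvme_vfio_dma_map, nvme_cmd_sync, s->vfio and s->queues[0] are
assumptions based on this series and the later merged driver):

    resp = qemu_try_blockalign0(bs, 4096);        /* response buffer */
    r = nvme_vfio_dma_map(s->vfio, resp, 4096, true, &iova); /* assumed helper */
    cmd.cdw10 = cpu_to_le32(0x1);                 /* CNS=1: Identify Controller */
    cmd.prp1 = cpu_to_le64(iova);                 /* bus address of resp */
    if (r || nvme_cmd_sync(bs, s->queues[0], &cmd)) {
        error_setg(errp, "Failed to identify controller");
    }
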
On Wed, Jul 05, 2017 at 09:36:31PM +0800, Fam Zheng wrote:
> diff --git a/block/nvme-vfio.c b/block/nvme-vfio.c
> new file mode 100644
> index 0000000..f030a82
> --- /dev/null
> +++ b/block/nvme-vfio.c
> @@ -0,0 +1,703 @@
> +/*
> + * NVMe VFIO interface
[...]

As far as I can tell nothing in this file is NVMe-specific. [...]
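
The file's job is the generic VFIO plumbing: opening the group and
container, mapping QEMU memory for device DMA, and exposing the BARs. The
DMA-map step comes down to the standard VFIO type1 IOMMU ioctl; a minimal
self-contained sketch using only the linux/vfio.h UAPI (the wrapper name
is hypothetical):

    #include <linux/vfio.h>
    #include <sys/ioctl.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical wrapper: make a host buffer visible to the device at a
     * caller-chosen IOVA. container_fd is an open VFIO container with the
     * type1 IOMMU enabled. */
    static int vfio_map_for_dma(int container_fd, void *host,
                                size_t size, uint64_t iova)
    {
        struct vfio_iommu_type1_dma_map map = {
            .argsz = sizeof(map),
            .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
            .vaddr = (uintptr_t)host,
            .iova = iova,
            .size = size,
        };
        return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
    }
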
On Thu, 07/06 13:38, Keith Busch wrote:
> On Wed, Jul 05, 2017 at 09:36:31PM +0800, Fam Zheng wrote:
> > This is a new protocol driver that exclusively opens a host NVMe
> > controller through VFIO. It achieves better latency than linux-aio by
> > completely bypassing the host kernel VFS/block layer.
[...]

On Wed, Jul 05, 2017 at 09:36:31PM +0800, Fam Zheng wrote:
> This is a new protocol driver that exclusively opens a host NVMe
> controller through VFIO. It achieves better latency than linux-aio by
> completely bypassing the host kernel VFS/block layer.
>
>     $rw-$bs-$iodepth   linux-aio   nvme://
>     randread-4k-1      82[...]
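
For context, the cover letter describes opening the controller with a
URL-style filename of the form nvme://<pci-address>/<namespace>; a usage
sketch (the PCI address and namespace ID are placeholders, and the device
must first be unbound from the host nvme driver and bound to vfio-pci):

    qemu-system-x86_64 ... \
        -drive file=nvme://0000:44:00.0/1,if=none,id=drive0 \
        -device virtio-blk-pci,drive=drive0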