On 01/16/2018 12:08 AM, Fam Zheng wrote:
This is a new protocol driver that exclusively opens a host NVMe
controller through VFIO. It achieves better latency than linux-aio by
completely bypassing the host kernel vfs/block layer.
$rw-$bs-$iodepth   linux-aio   nvme://
---------------------------------------
randread-4k-1          10.5k     21.6k
randread-512k-1          745      1591
randwrite-4k-1         30.7k     37.0k
randwrite-512k-1        1945      1980

(unit: IOPS)
The driver also integrates with the polling mechanism of the iothread.
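(For context, the driver would typically be used with an invocation along
the lines of the following; the PCI address 0000:01:00.0 and namespace 1
are placeholders, and the controller is assumed to have been detached from
the host nvme driver and bound to vfio-pci beforehand.)

    qemu-system-x86_64 ... \
        -drive file=nvme://0000:01:00.0/1,if=none,id=nvme0 \
        -device virtio-blk-pci,drive=nvme0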
This patch is co-authored by Paolo and me.
Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>
Signed-off-by: Fam Zheng <f...@redhat.com>
Message-Id: <20180110091846.10699-4-f...@redhat.com>
---
Sorry for not noticing sooner, but
+static int64_t coroutine_fn nvme_co_get_block_status(BlockDriverState *bs,
+                                                      int64_t sector_num,
+                                                      int nb_sectors, int *pnum,
+                                                      BlockDriverState **file)
+{
+    *pnum = nb_sectors;
+    *file = bs;
+
+    return BDRV_BLOCK_ALLOCATED | BDRV_BLOCK_OFFSET_VALID |
+           (sector_num << BDRV_SECTOR_BITS);
This is wrong. Drivers should only ever return BDRV_BLOCK_DATA (which
io.c then _adds_ BDRV_BLOCK_ALLOCATED to, as needed). I'll fix it up as
part of my byte-based block status series (v8 coming up soon).
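For illustration, a corrected version of that hunk might look like the
sketch below (same sector-based prototype as in the posted patch, untested;
io.c adds BDRV_BLOCK_ALLOCATED itself whenever a driver reports DATA or
ZERO):

    static int64_t coroutine_fn nvme_co_get_block_status(BlockDriverState *bs,
                                                          int64_t sector_num,
                                                          int nb_sectors, int *pnum,
                                                          BlockDriverState **file)
    {
        /* The whole device is data, mapped 1:1 onto the guest offset. */
        *pnum = nb_sectors;
        *file = bs;

        /* Report only DATA here; bdrv_co_get_block_status() in io.c turns
         * DATA/ZERO into BDRV_BLOCK_ALLOCATED for the caller. */
        return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID |
               (sector_num << BDRV_SECTOR_BITS);
    }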
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org