Hi,

Following recent discussion, and in particular Christoph's suggestion, this
patchset implements per-dispatch_queue flush machinery, so that:

        - the current init_request and exit_request callbacks can
        cover the flush request too, so the buggy approach of
        initializing the flush request's pdu by copying can be
        fixed (see the sketch below)

        - flush performance is improved in the multi hw-queue case

An approximately 70% throughput improvement is observed for sync writes
over virtio-blk with multiple dispatch queues; see the commit log of
patch 10/10 for details.
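
To illustrate the first point, here is a minimal driver-side sketch; the
names (my_cmd, my_init_request) are made up and the init_request signature
assumed is the v3.17-era one.  With per-dispatch_queue flush machinery the
flush request goes through the normal init path, so the same callback
covers its pdu and no copying from a data request is needed:

#include <linux/kernel.h>
#include <linux/blk-mq.h>
#include <linux/scatterlist.h>

/* hypothetical per-request pdu, for illustration only */
struct my_cmd {
	struct request *rq;		/* back-pointer to the request */
	struct scatterlist sg[4];	/* per-request scatterlist */
};

static int my_init_request(void *data, struct request *rq,
			   unsigned int hctx_idx, unsigned int request_idx,
			   unsigned int numa_node)
{
	struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);

	/*
	 * One-time pdu setup; with this series it also runs for the
	 * flush request instead of patching its pdu up by copying.
	 */
	cmd->rq = rq;
	sg_init_table(cmd->sg, ARRAY_SIZE(cmd->sg));
	return 0;
}

The callback is hooked up via .init_request in the driver's blk_mq_ops,
with exit_request as its teardown counterpart.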

This patchset can also be pulled from the tree below:

        git://kernel.ubuntu.com/ming/linux.git v3.17-block-dev_v3

V3:
        - don't return a failure code from blk_alloc_flush_queue(), to
        avoid freeing an invalid buffer in case of allocation failure
        - remove blk_init_flush() and blk_exit_flush()
        - remove unnecessary WARN_ON() from blk_alloc_flush_queue()

V2:
        - refactor blk_mq_init_hw_queues() and its pair; this also fixes
        the failure path, so the conversion to per-queue flush becomes
        simple
        - allocate/initialize the flush queue in blk_mq_init_hw_queues()
        - add sync write tests on virtio-blk backed by an SSD image

V1:
        - commit log typo fix
        - introduce blk_alloc_flush_queue() and its pair earlier, so
        that patches 5 and 8 become easier to review

 block/blk-core.c       |   12 ++--
 block/blk-flush.c      |  129 +++++++++++++++++++++++++-------------
 block/blk-mq.c         |  160 ++++++++++++++++++++++++++++++------------------
 block/blk-mq.h         |    1 -
 block/blk-sysfs.c      |    4 +-
 block/blk.h            |   35 ++++++++++-
 include/linux/blk-mq.h |    2 +
 include/linux/blkdev.h |   10 +--
 8 files changed, 230 insertions(+), 123 deletions(-)



Thanks,
--
Ming Lei


