On 2019/09/19 11:45, Hannes Reinecke wrote:
> From: Hannes Reinecke <[email protected]>
>
> A scheduler might be attached even for devices exposing more than
> one hardware queue, so the check for the number of hardware queues
> is pointless and should be removed.
>
> Signed-off-by: Hannes Reinecke <[email protected]>
> ---
> block/blk-mq.c | 6 +-----
> 1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 44ff3c1442a4..faab542e4836 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1931,7 +1931,6 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
>
> static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> {
> - const int is_sync = op_is_sync(bio->bi_opf);
> const int is_flush_fua = op_is_flush(bio->bi_opf);
> struct blk_mq_alloc_data data = { .flags = 0};
> struct request *rq;
> @@ -1977,7 +1976,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> /* bypass scheduler for flush rq */
> blk_insert_flush(rq);
> blk_mq_run_hw_queue(data.hctx, true);
> - } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs)) {
> + } else if (plug && q->mq_ops->commit_rqs) {
> /*
> * Use plugging if we have a ->commit_rqs() hook as well, as
> * we know the driver uses bd->last in a smart fashion.
> @@ -2020,9 +2019,6 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> blk_mq_try_issue_directly(data.hctx, same_queue_rq,
> &cookie);
> }
> - } else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
> - !data.hctx->dispatch_busy)) {
> - blk_mq_try_issue_directly(data.hctx, rq, &cookie);
It may be worth mentioning that blk_mq_sched_insert_request() does a direct
insert of the request using __blk_mq_insert_request() when no elevator is
attached. But that insert is different from what blk_mq_try_issue_directly()
does with __blk_mq_issue_directly(): the latter hands the request to the
device right away through queue->mq_ops->queue_rq(), while
__blk_mq_insert_request() only puts the request on ctx->rq_lists[type] to be
dispatched later.

So this patch also drops the optimized direct issue for the !q->elevator &&
!data.hctx->dispatch_busy case, and I am not sure yet what the actual
performance impact of that is. We may want to patch
blk_mq_sched_insert_request() to handle that case.
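
Something along these lines is what I have in mind, as a rough and completely
untested sketch only (not even compile tested): blk_mq_try_issue_directly()
is static to blk-mq.c today, so it would need to be made visible to
blk-mq-sched.c one way or another, and the cookie returned by the direct
issue path is simply dropped here.

void blk_mq_sched_insert_request(struct request *rq, bool at_head,
				 bool run_queue, bool async)
{
	struct request_queue *q = rq->q;
	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;

	if (!q->elevator && !hctx->dispatch_busy) {
		blk_qc_t cookie;

		/*
		 * No scheduler and the hw queue is not busy: issue the
		 * request directly to the driver, as the branch removed
		 * from blk_mq_make_request() used to do, instead of
		 * parking it on ctx->rq_lists[type].
		 */
		blk_mq_try_issue_directly(hctx, rq, &cookie);
		return;
	}

	/* ... existing insert paths, unchanged ... */
}

That would keep the direct issue fast path for the common no-elevator case
while still funneling everything through blk_mq_sched_insert_request().
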
> } else {
> blk_mq_sched_insert_request(rq, false, true, true);
> }
>
--
Damien Le Moal
Western Digital Research