On Fri, 20 Sep 2019, Andrea Vai wrote:

> On Thu, 19 Sep 2019 at 14:14 +0000, Damien Le Moal wrote:
> > On 2019/09/19 16:01, Alan Stern wrote:
> > [...]
> > > No doubt Andrea will be happy to test your fix when it's ready.
> 
> Yes, of course.
> 
> > 
> > Hannes posted an RFC series:
> > 
> > https://www.spinics.net/lists/linux-scsi/msg133848.html
> > 
> > Andrea can try it.
> 
> Ok, but I would need some instructions please, because I am not sure
> how to "try it". Sorry about that.

I have attached the two patches to this email.  You should start with a 
recent kernel source tree and apply the patches by doing:

        git apply patch1 patch2

or something similar.  Then build a kernel from the new source code and 
test it.
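
For reference, the whole sequence might look roughly like this (a sketch
only: the clone URL, patch file names, and config steps below are just
examples and will vary with your distribution and setup):

        # start from a recent mainline kernel tree
        git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
        cd linux

        # apply the two patches saved from this mail (file names are examples)
        git apply ../patch1 ../patch2

        # reuse the running kernel's configuration, then build and install
        cp /boot/config-$(uname -r) .config
        make olddefconfig
        make -j$(nproc)
        sudo make modules_install
        sudo make install

Then reboot into the new kernel and repeat your test.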

Ultimately, if nobody can find a way to restore the sequential I/O 
behavior we had prior to commit f664a3cc17b7, that commit may have to 
be reverted.

Alan Stern
From: Hannes Reinecke <h...@suse.com>

When blk_mq_request_issue_directly() returns BLK_STS_RESOURCE we
need to requeue the I/O, but adding it to the global request list
will mess up the passed-in request list. So re-add the request to
the original list and leave it to the caller to handle situations
where the list wasn't completely emptied.

Signed-off-by: Hannes Reinecke <h...@suse.com>
---
 block/blk-mq.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index b038ec680e84..44ff3c1442a4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1899,8 +1899,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
                if (ret != BLK_STS_OK) {
                        if (ret == BLK_STS_RESOURCE ||
                                        ret == BLK_STS_DEV_RESOURCE) {
-                               blk_mq_request_bypass_insert(rq,
-                                                       list_empty(list));
+                               list_add(&rq->queuelist, list);
                                break;
                        }
                        blk_mq_end_request(rq, ret);
-- 
2.16.4
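
For context on "leave it to the caller": in this path the caller is
blk_mq_sched_insert_requests(), which already falls back to a normal
insert for whatever is left on the list. Simplified from
block/blk-mq-sched.c of kernels in this timeframe (exact conditions
vary by version, so treat this as a sketch rather than verbatim code):

        /* simplified sketch of blk_mq_sched_insert_requests() */
        if (!hctx->dispatch_busy && !run_queue_async) {
                blk_mq_try_issue_list_directly(hctx, list);
                if (list_empty(list))
                        goto out;       /* everything was issued directly */
        }
        /* requests re-added to 'list' by the patch above land here */
        blk_mq_insert_requests(hctx, ctx, list);

So a request that could not be issued directly stays on the passed-in
list and gets queued together with the remaining requests, instead of
being bypass-inserted on its own.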
From: Hannes Reinecke <h...@suse.com>

A scheduler might be attached even for devices exposing more than
one hardware queue, so the check for the number of hardware queues
is pointless and should be removed.

Signed-off-by: Hannes Reinecke <h...@suse.com>
---
 block/blk-mq.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 44ff3c1442a4..faab542e4836 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1931,7 +1931,6 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
 
 static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 {
-       const int is_sync = op_is_sync(bio->bi_opf);
        const int is_flush_fua = op_is_flush(bio->bi_opf);
        struct blk_mq_alloc_data data = { .flags = 0};
        struct request *rq;
@@ -1977,7 +1976,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
                /* bypass scheduler for flush rq */
                blk_insert_flush(rq);
                blk_mq_run_hw_queue(data.hctx, true);
-       } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs)) {
+       } else if (plug && q->mq_ops->commit_rqs) {
                /*
                 * Use plugging if we have a ->commit_rqs() hook as well, as
                 * we know the driver uses bd->last in a smart fashion.
@@ -2020,9 +2019,6 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
                        blk_mq_try_issue_directly(data.hctx, same_queue_rq,
                                        &cookie);
                }
-       } else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
-                       !data.hctx->dispatch_busy)) {
-               blk_mq_try_issue_directly(data.hctx, rq, &cookie);
        } else {
                blk_mq_sched_insert_request(rq, false, true, true);
        }
-- 
2.16.4
