On Tue, Oct 20, 2015 at 1:59 AM, Jeff Moyer <[email protected]> wrote:
> Ming Lei <[email protected]> writes:
>
>> On Mon, Oct 19, 2015 at 11:38 PM, Jeff Moyer <[email protected]> wrote:
>>> Ming Lei <[email protected]> writes:
>>>
>>>> The trace point is for tracing plug events of each request
>>>> queue rather than of each task, so we should count the
>>>> requests in the plug list that belong to the current queue
>>>> rather than to the current task.
>>>
>>> If blk_queue_nomerges returns true, request_count won't be updated, so
>>> you've essentially broken plugging for those queues.
>>
>> Yes, you are right; blk_queue_bio() has the same problem too, and
>> it looks like automatic plug flushing can't work for nomerge queues at all.
>
> Hi Ming,
>
> Something like this would fix it.  Feel free to add it into your patch
> set if you're going to do another spin.  Otherwise I'll just send it off
> separately.

Looks fine to me; I'll add it in V2, thanks!


>
> -Jeff
>
> From: Jeff Moyer <[email protected]>
> Subject: block: fix plug list flushing for nomerge queues
>
> Request queues with merging disabled will not flush the plug list after
> BLK_MAX_REQUEST_COUNT requests have been queued, since the code relies
> on blk_attempt_plug_merge to compute the request_count.  Fix this by
> computing the number of queued requests even for nomerge queues.
>
> Signed-off-by: Jeff Moyer <[email protected]>
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 2eb722d..f0ae087 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -1594,6 +1594,30 @@ out:
>         return ret;
>  }
>
> +unsigned int blk_plug_queued_count(struct request_queue *q)
> +{
> +       struct blk_plug *plug;
> +       struct request *rq;
> +       struct list_head *plug_list;
> +       unsigned int ret = 0;
> +
> +       plug = current->plug;
> +       if (!plug)
> +               goto out;
> +
> +       if (q->mq_ops)
> +               plug_list = &plug->mq_list;
> +       else
> +               plug_list = &plug->list;
> +
> +       list_for_each_entry(rq, plug_list, queuelist) {
> +               if (rq->q == q)
> +                       ret++;
> +       }
> +out:
> +       return ret;
> +}
> +
>  void init_request_from_bio(struct request *req, struct bio *bio)
>  {
>         req->cmd_type = REQ_TYPE_FS;
> @@ -1641,9 +1665,11 @@ static void blk_queue_bio(struct request_queue *q, struct bio *bio)
>          * Check if we can merge with the plugged list before grabbing
>          * any locks.
>          */
> -       if (!blk_queue_nomerges(q) &&
> -           blk_attempt_plug_merge(q, bio, &request_count, NULL))
> -               return;
> +       if (!blk_queue_nomerges(q)) {
> +               if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
> +                       return;
> +       } else
> +               request_count = blk_plug_queued_count(q);
>
>         spin_lock_irq(q->queue_lock);
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 7785ae9..604a1f3 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1267,9 +1267,12 @@ static void blk_mq_make_request(struct request_queue *q, struct bio *bio)
>
>         blk_queue_split(q, &bio, q->bio_split);
>
> -       if (!is_flush_fua && !blk_queue_nomerges(q) &&
> -           blk_attempt_plug_merge(q, bio, &request_count, &same_queue_rq))
> -               return;
> +       if (!is_flush_fua && !blk_queue_nomerges(q)) {
> +               if (blk_attempt_plug_merge(q, bio, &request_count,
> +                                          &same_queue_rq))
> +                       return;
> +       } else
> +               request_count = blk_plug_queued_count(q);
>
>         rq = blk_mq_map_request(q, bio, &data);
>         if (unlikely(!rq))
> diff --git a/block/blk.h b/block/blk.h
> index 98614ad..aa27d02 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -86,6 +86,7 @@ bool bio_attempt_back_merge(struct request_queue *q, struct request *req,
>  bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
>                             unsigned int *request_count,
>                             struct request **same_queue_rq);
> +unsigned int blk_plug_queued_count(struct request_queue *q);
>
>  void blk_account_io_start(struct request *req, bool new_io);
>  void blk_account_io_completion(struct request *req, unsigned int bytes);
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/