On Sat, 2017-09-30 at 14:12 +0800, Ming Lei wrote:
> +void blk_set_preempt_only(struct request_queue *q, bool preempt_only)
> +{
> +     blk_mq_freeze_queue(q);
> +     if (preempt_only)
> +             queue_flag_set_unlocked(QUEUE_FLAG_PREEMPT_ONLY, q);
> +     else
> +             queue_flag_clear_unlocked(QUEUE_FLAG_PREEMPT_ONLY, q);
> +     blk_mq_unfreeze_queue(q);
> +}
> +EXPORT_SYMBOL(blk_set_preempt_only);
> +
>  /**
>   * __blk_run_queue_uncond - run a queue whether or not it has been stopped
>   * @q:       The queue to run
> @@ -771,9 +782,18 @@ int blk_queue_enter(struct request_queue *q, unsigned flags)
>       while (true) {
>               int ret;
>  
> +             /*
> +              * preempt_only flag has to be set after queue is frozen,
> +              * so it can be checked here lockless and safely
> +              */
> +             if (blk_queue_preempt_only(q)) {
> +                     if (!(flags & BLK_REQ_PREEMPT))
> +                             goto slow_path;
> +             }
> +
>               if (percpu_ref_tryget_live(&q->q_usage_counter))
>                       return 0;

Sorry, but I don't think these changes can prevent a non-preempt request
from being allocated after a (SCSI) queue has been quiesced. If the CPU
that calls blk_queue_enter() only observes the setting of the
PREEMPT_ONLY flag after the queue has been unfrozen and after the SCSI
device state has been changed to QUIESCED, then blk_queue_enter() can
still succeed for a non-preempt request. I think that is exactly the
scenario we want to avoid. This is why my patch issues a
synchronize_rcu() call before the queue is unfrozen, and also why in my
patch the percpu_ref_tryget_live() call occurs before the test of the
PREEMPT_ONLY flag.

Bart.
