On Wed, Oct 07, 2015 at 04:26:27PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 06, 2015 at 09:29:37AM -0700, Paul E. McKenney wrote:
> >  void rcu_sched_qs(void)
> >  {
> > +   unsigned long flags;
> > +
> >     if (__this_cpu_read(rcu_sched_data.cpu_no_qs.s)) {
> >             trace_rcu_grace_period(TPS("rcu_sched"),
> >                                    __this_cpu_read(rcu_sched_data.gpnum),
> >                                    TPS("cpuqs"));
> >             __this_cpu_write(rcu_sched_data.cpu_no_qs.b.norm, false);
> > +           if (!__this_cpu_read(rcu_sched_data.cpu_no_qs.b.exp))
> > +                   return;
> > +           local_irq_save(flags);
> >             if (__this_cpu_read(rcu_sched_data.cpu_no_qs.b.exp)) {
> >                     __this_cpu_write(rcu_sched_data.cpu_no_qs.b.exp, false);
> >                     rcu_report_exp_rdp(&rcu_sched_state,
> >                                        this_cpu_ptr(&rcu_sched_data),
> >                                        true);
> >             }
> > +           local_irq_restore(flags);
> >     }
> >  }
> 
> *sigh*.. still rare I suppose, but should we look at doing something
> like this?

Indeed, that approach looks better than moving rcu_note_context_switch(),
which would probably result in deadlocks.  I will update my patch accordingly.

                                                        Thanx, Paul

> ---
>  kernel/sched/core.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index fe819298c220..3d830c3491c4 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3050,7 +3050,6 @@ static void __sched __schedule(void)
> 
>       cpu = smp_processor_id();
>       rq = cpu_rq(cpu);
> -     rcu_note_context_switch();
>       prev = rq->curr;
> 
>       schedule_debug(prev);
> @@ -3058,13 +3057,16 @@ static void __sched __schedule(void)
>       if (sched_feat(HRTICK))
>               hrtick_clear(rq);
> 
> +     local_irq_disable();
> +     rcu_note_context_switch();
> +
>       /*
>        * Make sure that signal_pending_state()->signal_pending() below
>        * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
>        * done by the caller to avoid the race with signal_wake_up().
>        */
>       smp_mb__before_spinlock();
> -     raw_spin_lock_irq(&rq->lock);
> +     raw_spin_lock(&rq->lock);
>       lockdep_pin_lock(&rq->lock);
> 
>       rq->clock_skip_update <<= 1; /* promote REQ to ACT */
> 

