On Thu, Oct 15, 2020 at 09:15:01AM -0700, Paul E. McKenney wrote:
> On Thu, Oct 15, 2020 at 11:49:26AM +0200, Peter Zijlstra wrote:
> > On Wed, Oct 14, 2020 at 08:41:28PM -0700, Paul E. McKenney wrote:

[ . . . ]

> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -1764,8 +1764,7 @@ static bool rcu_gp_init(void)
> >             smp_mb(); // Pair with barriers used when updating ->ofl_seq to odd values.
> >             firstseq = READ_ONCE(rnp->ofl_seq);
> >             if (firstseq & 0x1)
> > -                   while (firstseq == smp_load_acquire(&rnp->ofl_seq))
> > -                           schedule_timeout_idle(1);  // Can't wake unless RCU is watching.
> > +                   smp_cond_load_relaxed(&rnp->ofl_seq, VAL == firstseq);
> >             smp_mb(); // Pair with barriers used when updating ->ofl_seq to even values.
> >             raw_spin_lock(&rcu_state.ofl_lock);
> >             raw_spin_lock_irq_rcu_node(rnp);
> 
> This would work, and would be absolutely necessary if grace periods
> took only (say) 500 nanoseconds to complete.  But given that they take
> multiple milliseconds at best, and given that this race is extremely
> unlikely, and given the heavy use of virtualization, I have to stick
> with the schedule_timeout_idle().
> 
> In fact, I have on my list to force this race to happen on the grounds
> that if it ain't tested, it don't work...

And it only took about 1000 seconds of TREE03 to make this happen, so we
should be good just relying on rcutorture.  ;-)
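
In case it helps to see the tradeoff outside of the kernel, below is a rough
userspace sketch, plain C11 atomics plus pthreads rather than the real RCU
code, contrasting a pure spin-wait in the style of smp_cond_load_relaxed()
with a sleep-and-poll loop along the lines of the schedule_timeout_idle()
approach.  The wait_spin()/wait_sleep() helpers and the simulated sequence
counter are made up for illustration only; it builds with something like
"gcc -std=c11 -pthread".

#define _POSIX_C_SOURCE 200809L

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

static atomic_ulong seq;        /* Stand-in for rnp->ofl_seq. */

/* Spin-wait, in the style of smp_cond_load_relaxed(): burns the CPU. */
static void wait_spin(unsigned long firstseq)
{
        while (atomic_load_explicit(&seq, memory_order_relaxed) == firstseq)
                ;       /* On a preempted guest vCPU, this just spins. */
}

/* Sleep-and-poll, in the style of the schedule_timeout_idle() loop. */
static void wait_sleep(unsigned long firstseq)
{
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 1000 * 1000 };

        while (atomic_load_explicit(&seq, memory_order_acquire) == firstseq)
                nanosleep(&ts, NULL);   /* Give up the CPU between checks. */
}

/* Updater thread: simulate a multi-millisecond delay before advancing. */
static void *updater(void *arg)
{
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 5 * 1000 * 1000 };

        (void)arg;
        nanosleep(&ts, NULL);
        atomic_fetch_add(&seq, 1);      /* Let the waiter proceed. */
        return NULL;
}

int main(void)
{
        unsigned long firstseq = atomic_load(&seq);
        pthread_t tid;

        pthread_create(&tid, NULL, updater, NULL);
        wait_sleep(firstseq);           /* Swap in wait_spin() to compare. */
        pthread_join(tid, NULL);
        printf("sequence advanced to %lu\n", atomic_load(&seq));
        return 0;
}

The point being that the spinning variant holds onto its (possibly virtual)
CPU for the full duration of the wait, while the sleeping variant gives it
up between checks.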

                                                        Thanx, Paul
