On Wed, Apr 16, 2014 at 07:21:48AM +0200, Peter Zijlstra wrote:
> On Tue, Apr 15, 2014 at 08:54:19PM -0700, Paul E. McKenney wrote:
> > But falling back on the old ways of doing this at least looks a bit
> > nicer:
> > 
> >     static inline bool rcu_should_resched(void)
> >     {
> >             int t;
> >             int *tp = &per_cpu(rcu_cond_resched_count, raw_smp_processor_id());
> > 
> >             t = ACCESS_ONCE(*tp) + 1;
> >             if (t < RCU_COND_RESCHED_LIM) {
> 
> <here>
> 
> >                     ACCESS_ONCE(*tp) = t;
> >                     return false;
> >             }
> >             return true;
> >     }
> > 
> > Other thoughts?
> 
> Still broken: if A starts out on CPU1, gets migrated to CPU0 at
> <here>, and B then starts the same sequence on CPU1, it is possible
> for CPU0 and CPU1 to concurrently write different values into the
> same rcu_cond_resched_count.

That is actually OK.  The values written are guaranteed to be between
zero and RCU_COND_RESCHED_LIM-1, so the counter can never escape its
intended range; the worst that can happen is a lost increment.  In
theory, yes, a horribly unlucky sequence of preemptions could keep
losing increments so that rcu_should_resched() takes longer than
intended to reach RCU_COND_RESCHED_LIM, but the probability is -way-
lower than that of hardware failure.

However...

> You really want to disable preemption around there. The proper old way
> would've been get_cpu_var()/put_cpu_var().

If you are OK with unconditionally disabling preemption at this point,
that avoids worrying about probabilities altogether and is quite a bit
simpler.

So unconditional preempt_disable()/preempt_enable() it is.
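
For concreteness, here is a sketch of what I have in mind -- just the
snippet above reworked, untested:

    static inline bool rcu_should_resched(void)
    {
            int t;
            int *tp;
            bool ret = true;

            preempt_disable();
            /*
             * No migration until preempt_enable(), so the plain
             * smp_processor_id() and the read-modify-write below
             * are now free of the race Peter describes.
             */
            tp = &per_cpu(rcu_cond_resched_count, smp_processor_id());
            t = ACCESS_ONCE(*tp) + 1;
            if (t < RCU_COND_RESCHED_LIM) {
                    ACCESS_ONCE(*tp) = t;
                    ret = false;
            }
            preempt_enable();
            return ret;
    }

Wrapping the counter access in get_cpu_var()/put_cpu_var() instead
would come to the same thing, since that pair also just disables and
re-enables preemption.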

                                                        Thanx, Paul
