On Fri, Aug 08, 2014 at 09:13:26PM +0200, Peter Zijlstra wrote:
> 
> 
> So I think you can make the entire thing work with
> rcu_note_context_switch().
> 
> If we have the sync thing do something like:
> 
> 
>       for_each_task(t) {
>               atomic_inc(&rcu_tasks);
>               atomic_or(&t->rcu_attention, RCU_TASK);
>               smp_mb__after_atomic();
>               if (!t->on_rq) {
>                       if (atomic_test_and_clear(&t->rcu_attention, RCU_TASK))
>                               atomic_dec(&rcu_tasks);
>               }
>       }
> 
>       wait_event(&rcu_tasks_wq, !atomic_read(&rcu_tasks));
> 
> 
> And then we have rcu_task_note_context_switch() (as called from
> rcu_note_context_switch) do:
> 
> 
>       /* we want actual context switches, ignore preemption */
>       if (preempt_count() & PREEMPT_ACTIVE)
>               return;
> 
>       /* if not marked for RCU attention, bail */
>       if (!(atomic_read(&t->rcu_attention) & RCU_TASK))
>               return;
> 
>       /* raced with sync_rcu_task(), all done */
>       if (!atomic_test_and_clear(&t->rcu_attention, RCU_TASK))
>               return;
> 
>       /* not the last.. */
>       if (!atomic_dec_and_test(&rcu_tasks))
>               return;
> 
>       wake_up(&rcu_tasks_wq);
> 
> 
> The idea is to share rcu_attention with rcu_preempt, such that we only
> touch a single 'extra' cacheline in case RCU doesn't need to pay
> attention to this task.
> 
> Also, it would be good if we can manage to squeeze this variable in a
> cacheline that's already touched by the schedule() so as not to incur
> undue overhead.
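
Just to make sure I am reading this correctly, the state that pseudocode
assumes would be something like the following (illustrative sketch only,
names taken from your snippet, none of it existing code):

        /* Bit in ->rcu_attention marking tasks the sync side waits on. */
        #define RCU_TASK        0x01

        /* Number of tasks not yet known to have voluntarily switched. */
        static atomic_t rcu_tasks = ATOMIC_INIT(0);

        /* The sync side sleeps here until the count reaches zero. */
        static DECLARE_WAIT_QUEUE_HEAD(rcu_tasks_wq);

plus an atomic_t rcu_attention in task_struct, ideally sharing a cacheline
with fields that schedule() already writes.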

This approach does not get me the idle tasks and the NO_HZ_FULL usermode
tasks.  I am pretty sure that I am stuck polling in those cases, so I
might as well poll.
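
The sort of per-task check I have in mind for the polling loop looks
roughly like this (sketch only: nvcsw_snap would be a snapshot of ->nvcsw
taken by the sync side, while ->on_rq and ->nvcsw already exist):

        /*
         * Return true if this holdout task no longer needs to be waited on,
         * either because it is voluntarily blocked or because it has done a
         * voluntary context switch since the snapshot was taken.
         */
        static bool rcu_tasks_holdout_done(struct task_struct *t,
                                           unsigned long nvcsw_snap)
        {
                if (!ACCESS_ONCE(t->on_rq))
                        return true;    /* Voluntarily blocked. */
                if (ACCESS_ONCE(t->nvcsw) != nvcsw_snap)
                        return true;    /* Voluntary switch since snapshot. */
                return false;           /* Keep polling this one. */
        }

The idle and NO_HZ_FULL usermode cases need their own checks, but those
also come down to periodically inspecting state rather than hooking the
context switch itself.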

> And on that, you probably should change rcu_sched_qs() to read:
> 
>       this_cpu_inc(rcu_sched_data.passed_quiesce);
> 
> That avoids touching the per-cpu data offset.

Hmmm...  Interrupts are disabled, so no need to further disable
interrupts.  Storing 1 works fine, no need to increment.  If I followed
the twisty per_cpu passages correctly, my guess is that you would like
me to do something like this:

        __this_cpu_write(rcu_sched_data.passed_quiesce, 1);

Does that work?
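
For reference, my understanding of the difference (sketch only, not the
actual rcu_sched_qs() body):

        /* Computes this CPU's address first, then stores through it: */
        per_cpu(rcu_sched_data, smp_processor_id()).passed_quiesce = 1;

        /* Typically a single segment-relative store on x86: */
        __this_cpu_write(rcu_sched_data.passed_quiesce, 1);

And since we are called with interrupts disabled, the double-underscore
form's requirement that preemption already be disabled is met here.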

> And it would be very good if we could avoid the unconditional IRQ flag
> fiddling in rcu_preempt_note_context_switch(); they're expensive.  This
> looks entirely feasible in the 'normal' case where
> t->rcu_read_unlock_special doesn't have RCU_READ_UNLOCK_NEED_QS set.

Agreed, but sometimes RCU_READ_UNLOCK_NEED_QS is set.
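
The fast path would presumably look something like this (sketch of the
idea only, with a made-up function name and the slow path hand-waved
away):

        static void rcu_preempt_qs_fastpath(struct task_struct *t)
        {
                unsigned long flags;

                /* Common case: nothing to report, so no irq fiddling. */
                if (!(t->rcu_read_unlock_special & RCU_READ_UNLOCK_NEED_QS))
                        return;

                local_irq_save(flags);
                rcu_preempt_qs(smp_processor_id());
                local_irq_restore(flags);
        }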

That said, I should probably revisit RCU_READ_UNLOCK_NEED_QS.  A lot has
changed since I wrote that code.

                                                        Thanx, Paul
