So I think you can make the entire thing work with
rcu_note_context_switch().

If we have the sync thing do something like:


        for_each_task(t) {
                /* account for this task and mark it as needing attention */
                atomic_inc(&rcu_tasks);
                atomic_or(RCU_TASK, &t->rcu_attention);

                /* order the marking against the ->on_rq load below */
                smp_mb__after_atomic();

                /* not on a runqueue: already quiescent, take it back out */
                if (!t->on_rq) {
                        if (atomic_test_and_clear(&t->rcu_attention, RCU_TASK))
                                atomic_dec(&rcu_tasks);
                }
        }

        wait_event(rcu_tasks_wq, !atomic_read(&rcu_tasks));
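
Note that atomic_test_and_clear() isn't a stock atomic_t primitive; a
minimal sketch of the state the loop assumes, with one possible
cmpxchg()-based implementation (names as above, everything here is
illustrative):

        static atomic_t rcu_tasks = ATOMIC_INIT(0);
        static DECLARE_WAIT_QUEUE_HEAD(rcu_tasks_wq);

        #define RCU_TASK        0x01

        static bool atomic_test_and_clear(atomic_t *v, int mask)
        {
                int old;

                do {
                        old = atomic_read(v);
                        if (!(old & mask))
                                return false;
                } while (atomic_cmpxchg(v, old, old & ~mask) != old);

                return true;
        }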


And then we have rcu_task_note_context_switch() (as called from
rcu_note_context_switch()) do:


        /* we want actual context switches, ignore preemption */
        if (preempt_count() & PREEMPT_ACTIVE)
                return;

        /* if not marked for RCU attention, bail */
        if (!(atomic_read(&t->rcu_attention) & RCU_TASK))
                return;

        /* raced with sync_rcu_task(), all done */
        if (!atomic_test_and_clear(&t->rcu_attention, RCU_TASK))
                return;

        /* not the last.. */
        if (!atomic_dec_and_test(&rcu_tasks))
                return;

        wake_up(&rcu_tasks_wq);
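
For completeness, the wiring would be something like this (a sketch;
the real rcu_note_context_switch() also has tracing around these
calls):

        void rcu_note_context_switch(int cpu)
        {
                rcu_sched_qs(cpu);
                rcu_preempt_note_context_switch(cpu);
                rcu_task_note_context_switch(current);
        }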


The idea is to share rcu_attention with rcu_preempt, such that we only
touch a single 'extra' cacheline in case RCU doesn't need to pay
attention to this task.

Also, it would be good if we could manage to squeeze this variable into
a cacheline that's already touched by schedule() so as not to incur
undue overhead.
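
Something along these lines, say (purely illustrative placement; the
point is that ->on_rq is written by every schedule() anyway):

        struct task_struct {
                /* ... */
                int             on_rq;          /* dirtied by schedule() */
                atomic_t        rcu_attention;  /* RCU_TASK and friends */
                /* ... */
        };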

And on that, you probably should change rcu_sched_qs() to read:

        this_cpu_inc(rcu_sched_data.passed_quiesce);

That avoids touching the per-cpu data offset.
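
That is, instead of first materializing a per-cpu pointer, roughly
(sketch of both variants; on x86 this_cpu_inc() compiles down to a
single %gs-prefixed instruction):

        /* current, more or less: compute the per-cpu pointer, then store */
        struct rcu_data *rdp = &per_cpu(rcu_sched_data, cpu);
        rdp->passed_quiesce = 1;

        /* suggested: no pointer, no explicit offset computation */
        this_cpu_inc(rcu_sched_data.passed_quiesce);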

And it would be very good if we could avoid the unconditional IRQ flag
fiddling in rcu_preempt_note_context_switch(); it's expensive, and
skipping it looks entirely feasible in the 'normal' case where
t->rcu_read_unlock_special doesn't have RCU_READ_UNLOCK_NEED_QS set.
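
Roughly this shape, say (heavily simplified sketch; the slow path is
the existing body of the function):

        static void rcu_preempt_note_context_switch(int cpu)
        {
                struct task_struct *t = current;
                unsigned long flags;

                /*
                 * Fast path: no active RCU reader and no quiescent
                 * state request pending, so there's nothing to record
                 * and no IRQ flag fiddling is required.
                 */
                if (likely(!t->rcu_read_lock_nesting &&
                           !(t->rcu_read_unlock_special & RCU_READ_UNLOCK_NEED_QS)))
                        return;

                local_irq_save(flags);
                /* ... existing blocked-reader / NEED_QS handling ... */
                local_irq_restore(flags);
        }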