From: "Joel Fernandes (Google)" <j...@joelfernandes.org>

RCU-tasks callbacks can take at least 1 second before they are executed,
even if the hold-out tasks enter their quiescent states quickly. I noticed
this while testing trampoline callback execution.

To test trampoline freeing, I wrote a simple script:
cd /sys/kernel/debug/tracing/
echo '*:traceon' > set_ftrace_filter;
echo '!*:traceon' > set_ftrace_filter;

With this patch:
real    0m0.256s
user    0m0.000s
sys     0m0.226s

Without this patch:
real    0m1.313s
user    0m0.000s
sys     0m0.222s
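
(The numbers above are the real/user/sys output of time(1); the exact
invocation isn't shown, but a measurement along these lines is assumed,
using the same tracefs path as the script above:)

cd /sys/kernel/debug/tracing/
time sh -c "echo '*:traceon' > set_ftrace_filter;
            echo '!*:traceon' > set_ftrace_filter"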

That's a greater than 5X speedup. To accomplish this, I wait very briefly,
for around 2 scheduler ticks' worth of time, before entering the hold-out
checking loop. The loop still checks the held-out tasks every 1 second as
before, in case this first check does not succeed.

Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Boqun Feng <boqun.f...@gmail.com>
Cc: Paul McKenney <paul...@linux.vnet.ibm.com>
Cc: byungchul.p...@lge.com
Cc: kernel-t...@android.com
Signed-off-by: Joel Fernandes (Google) <j...@joelfernandes.org>
---
 kernel/rcu/update.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 5783bdf86e5a..d221db4ab3f4 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -743,6 +743,12 @@ static int __noreturn rcu_tasks_kthread(void *arg)
                 */
                synchronize_srcu(&tasks_rcu_exit_srcu);
 
+               /*
+                * Wait a little bit in case held tasks are released
+                * during their next timer ticks.
+                */
+               schedule_timeout_interruptible(2);
+
                /*
                 * Each pass through the following loop scans the list
                 * of holdout tasks, removing any that are no longer
@@ -755,7 +761,6 @@ static int __noreturn rcu_tasks_kthread(void *arg)
                        int rtst;
                        struct task_struct *t1;
 
-                       schedule_timeout_interruptible(HZ);
                        rtst = READ_ONCE(rcu_task_stall_timeout);
                        needreport = rtst > 0 &&
                                     time_after(jiffies, lastreport + rtst);
@@ -768,6 +773,11 @@ static int __noreturn rcu_tasks_kthread(void *arg)
                                check_holdout_task(t, needreport, &firstreport);
                                cond_resched();
                        }
+
+                       if (list_empty(&rcu_tasks_holdouts))
+                               break;
+
+                       schedule_timeout_interruptible(HZ);
                }
 
                /*
-- 
2.17.0.441.gb46fe60e1d-goog
