We assign the cpu id into the work struct's data field in
__queue_delayed_work_on().  In the current implementation, when a work
item comes in for the first time, the id of the currently running cpu
is assigned.  So if we call __queue_delayed_work_on() for CPU A while
running on CPU B, __queue_work(), invoked from delayed_work_timer_fn(),
goes into the following sub-optimal path in the WQ_NON_REENTRANT case:

        gcwq = get_gcwq(cpu);
        if (wq->flags & WQ_NON_REENTRANT &&
                (last_gcwq = get_work_gcwq(work)) && last_gcwq != gcwq) {

Change lcpu to @cpu, and then change lcpu to the local cpu only if it
is still WORK_CPU_UNBOUND.  This is sufficient to keep us out of the
sub-optimal path.
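To make the intended selection concrete, here is a small stand-alone
sketch (not kernel code; pick_lcpu(), NO_GCWQ and the sample cpu
numbers are made up for illustration) that models the lcpu choice
after this patch:

        #include <stdio.h>

        /* Illustrative stand-ins, not the kernel definitions. */
        #define WORK_CPU_UNBOUND   -1
        #define NO_GCWQ            -2   /* work has no gcwq recorded yet */

        /*
         * Models the lcpu choice after this patch: prefer the cpu already
         * recorded in the work's gcwq, otherwise the requested cpu, and
         * fall back to the local cpu only when the result is still
         * WORK_CPU_UNBOUND.
         */
        static int pick_lcpu(int requested_cpu, int gcwq_cpu, int local_cpu)
        {
                int lcpu = requested_cpu;

                if (gcwq_cpu != NO_GCWQ)
                        lcpu = gcwq_cpu;
                if (lcpu == WORK_CPU_UNBOUND)
                        lcpu = local_cpu;
                return lcpu;
        }

        int main(void)
        {
                /* First queue: requested CPU 0 while running on CPU 2.
                 * The old code recorded 2; the new logic records 0, so the
                 * later __queue_work() sees matching gcwqs. */
                printf("first queue: lcpu = %d\n", pick_lcpu(0, NO_GCWQ, 2));

                /* Work already has a gcwq on CPU 1: keep it. */
                printf("requeue:     lcpu = %d\n", pick_lcpu(0, 1, 2));

                /* Unbound request and no recorded gcwq: local cpu. */
                printf("unbound:     lcpu = %d\n",
                       pick_lcpu(WORK_CPU_UNBOUND, NO_GCWQ, 2));
                return 0;
        }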

Signed-off-by: Joonsoo Kim <js1...@gmail.com>

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c29f2dc..32c4f79 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1356,9 +1356,16 @@ static void __queue_delayed_work(int cpu, struct workqueue_struct *wq,
        if (!(wq->flags & WQ_UNBOUND)) {
                struct global_cwq *gcwq = get_work_gcwq(work);
 
-               if (gcwq && gcwq->cpu != WORK_CPU_UNBOUND)
+               /*
+                * If we cannot get the gcwq from the work directly, select
+                * the last cpu deliberately so that reentrance detection for
+                * delayed work doesn't hit the sub-optimal path.  Here we
+                * assign the requested cpu to lcpu, unless WORK_CPU_UNBOUND.
+                */
+               lcpu = cpu;
+               if (gcwq)
                        lcpu = gcwq->cpu;
-               else
+               if (lcpu == WORK_CPU_UNBOUND)
                        lcpu = raw_smp_processor_id();
        } else {
                lcpu = WORK_CPU_UNBOUND;
-- 
1.7.9.5
