On Sun, Jan 17, 2021 at 12:14:34AM +0800, Lai Jiangshan wrote:
> BP:                             AP:             worker:
> cpus_write_lock()
>   bringup_cpu()                                 work_item_func()
>     bringup_wait_for_ap                           get_online_cpus()
>       kthread_park(worker)
Thanks, pictures are easier. Agreed, that's a problem: kthread_park()
waits for the worker to hit kthread_parkme(), but the worker is stuck
in get_online_cpus() behind the cpus_write_lock() the BP holds, so
neither side can make progress.

I think I've also found another problem: the rescuer becomes part of
for_each_pool_worker() between worker_attach_to_pool() and
worker_detach_from_pool(), so when things align we would try to
kthread_park() the rescuer, and rescuer_thread() doesn't have a
kthread_parkme(). And we already rely on this 'ugly' thing of first
doing kthread_set_per_cpu() and fixing up the affinity later for the
rescuer anyway.

Let me restart the SRCU-P testing with the below delta applied.

---
 kernel/workqueue.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 1db769b116a1..894bb885b40b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2368,7 +2368,6 @@ static int worker_thread(void *__worker)
 	/* tell the scheduler that this is a workqueue worker */
 	set_pf_worker(true);
 woke_up:
-	kthread_parkme();
 	raw_spin_lock_irq(&pool->lock);
 
 	/* am I supposed to die? */
@@ -2426,7 +2425,7 @@ static int worker_thread(void *__worker)
 			move_linked_works(work, &worker->scheduled, NULL);
 			process_scheduled_works(worker);
 		}
-	} while (keep_working(pool) && !kthread_should_park());
+	} while (keep_working(pool));
 
 	worker_set_flags(worker, WORKER_PREP);
 sleep:
@@ -2438,12 +2437,9 @@ static int worker_thread(void *__worker)
 	 * event.
 	 */
 	worker_enter_idle(worker);
-	set_current_state(TASK_IDLE);
+	__set_current_state(TASK_IDLE);
 	raw_spin_unlock_irq(&pool->lock);
-
-	if (!kthread_should_park())
-		schedule();
-
+	schedule();
 
 	goto woke_up;
 }
@@ -4979,9 +4975,9 @@ static void rebind_workers(struct worker_pool *pool)
 	 * from CPU_ONLINE, the following shouldn't fail.
 	 */
 	for_each_pool_worker(worker, pool) {
-		WARN_ON_ONCE(kthread_park(worker->task) < 0);
 		kthread_set_per_cpu(worker->task, pool->cpu);
-		kthread_unpark(worker->task);
+		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
+						  pool->attrs->cpumask) < 0);
 	}
 
 	raw_spin_lock_irq(&pool->lock);
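
FWIW, the wait cycle in Lai's picture can be reproduced in userspace.
Below is a minimal pthread sketch, purely illustrative: the rwlock
stands in for cpus_write_lock()/get_online_cpus(), the condvar
handshake for kthread_park()/kthread_parkme(); none of these names are
kernel API and the whole thing is just an analogy, not kernel code.

#include <pthread.h>
#include <stdio.h>

/* Stands in for cpus_write_lock() / get_online_cpus(). */
static pthread_rwlock_t hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Stands in for the kthread_park() / kthread_parkme() handshake. */
static pthread_mutex_t park_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t park_cond = PTHREAD_COND_INITIALIZER;
static int parked;

static void *worker_fn(void *arg)
{
	/* work_item_func(): blocks behind the writer (the BP). */
	pthread_rwlock_rdlock(&hotplug_lock);	/* get_online_cpus() */
	pthread_rwlock_unlock(&hotplug_lock);

	/* Never reached: the "parkme" the BP is waiting for. */
	pthread_mutex_lock(&park_mutex);
	parked = 1;
	pthread_cond_signal(&park_cond);
	pthread_mutex_unlock(&park_mutex);
	return NULL;
}

int main(void)
{
	pthread_t worker;

	pthread_rwlock_wrlock(&hotplug_lock);	/* cpus_write_lock() */
	pthread_create(&worker, NULL, worker_fn, NULL);

	/* kthread_park(worker): wait until the worker parks itself. */
	pthread_mutex_lock(&park_mutex);
	while (!parked)
		pthread_cond_wait(&park_cond, &park_mutex); /* hangs here */
	pthread_mutex_unlock(&park_mutex);

	pthread_rwlock_unlock(&hotplug_lock);
	pthread_join(worker, NULL);
	puts("no deadlock");
	return 0;
}

Build with 'gcc -pthread' and it hangs by construction: main() holds
the write lock while waiting for 'parked', the worker waits for the
read lock, which is exactly the cycle in the picture. Removing the
park handshake from this path (as the delta above does for
rebind_workers()) is what breaks the cycle.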