Because the old busy_worker_rebind_fn() has to wait until all idle workers finish rebinding, we had to use two flags, WORKER_UNBOUND and WORKER_REBIND, to avoid prematurely clearing all NOT_RUNNING bits under highly frequent offline/online.
But the current code neither waits for idle workers nor releases gcwq->lock while rebinding in rebind_workers(), so we no longer need two flags; one is enough. Remove WORKER_REBIND from busy rebinding.

Signed-off-by: Lai Jiangshan <la...@cn.fujitsu.com>
---
 kernel/workqueue.c |    8 +-------
 1 files changed, 1 insertions(+), 7 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e8b28d6..4b1ff46 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1641,7 +1641,7 @@ static void busy_worker_rebind_fn(struct work_struct *work)
 	struct global_cwq *gcwq = worker->pool->gcwq;
 
 	if (worker_maybe_bind_and_lock(worker))
-		worker_clr_flags(worker, WORKER_REBIND);
+		worker_clr_flags(worker, WORKER_UNBOUND);
 
 	spin_unlock_irq(&gcwq->lock);
 }
@@ -1696,15 +1696,9 @@ static void rebind_workers(struct global_cwq *gcwq)
 
 	/* rebind busy workers */
 	for_each_busy_worker(worker, i, pos, gcwq) {
-		unsigned long worker_flags = worker->flags;
 		struct work_struct *rebind_work = &worker->rebind_work;
 		struct workqueue_struct *wq;
 
-		/* morph UNBOUND to REBIND atomically */
-		worker_flags &= ~WORKER_UNBOUND;
-		worker_flags |= WORKER_REBIND;
-		ACCESS_ONCE(worker->flags) = worker_flags;
-
 		if (test_and_set_bit(WORK_STRUCT_PENDING_BIT,
 				     work_data_bits(rebind_work)))
 			continue;
-- 
1.7.4.4
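
For readers not familiar with the flag dance being removed, here is a minimal, self-contained userspace sketch of the before/after flag transitions. It is not part of the patch: the flag values, the worker struct, and the helper names old_scheme()/new_scheme() are illustrative assumptions, not the kernel's definitions.

/*
 * Standalone sketch (not kernel code) of the flag handling this patch
 * simplifies.  Flag values and the worker struct are illustrative.
 */
#include <stdio.h>

#define WORKER_UNBOUND	(1 << 1)	/* worker not bound to its CPU (illustrative value) */
#define WORKER_REBIND	(1 << 2)	/* transient flag the patch removes (illustrative value) */

struct worker {
	unsigned int flags;
};

/*
 * Old scheme: because busy rebinding could overlap with idle workers
 * finishing their own rebind, UNBOUND was morphed into REBIND so that
 * a NOT_RUNNING bit stayed set until the rebind work ran; the work
 * item then cleared REBIND.
 */
static void old_scheme(struct worker *w)
{
	unsigned int flags = w->flags;

	flags &= ~WORKER_UNBOUND;	/* morph UNBOUND to REBIND */
	flags |= WORKER_REBIND;
	w->flags = flags;
	/* ...later, the busy worker's rebind work clears REBIND... */
	w->flags &= ~WORKER_REBIND;
}

/*
 * New scheme: rebind_workers() neither releases the lock nor waits for
 * idle workers, so the busy worker can simply clear UNBOUND once it
 * has rebound itself.
 */
static void new_scheme(struct worker *w)
{
	w->flags &= ~WORKER_UNBOUND;
}

int main(void)
{
	struct worker a = { .flags = WORKER_UNBOUND };
	struct worker b = { .flags = WORKER_UNBOUND };

	old_scheme(&a);
	new_scheme(&b);

	/* both paths end with no NOT_RUNNING bits set */
	printf("old scheme: %#x, new scheme: %#x\n", a.flags, b.flags);
	return 0;
}

Both paths reach the same end state; the point of the patch is that, with rebind_workers() holding gcwq->lock for the whole pass, the intermediate REBIND state no longer buys anything.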