Hey, again.

On Fri, Feb 01, 2013 at 02:41:36AM +0800, Lai Jiangshan wrote:
> Currently, when we need to find the executing worker from a work item,
> we need two steps:
>
>   find the pool from the idr by pool id;
>   find the worker from the hash table of that pool.
>
> Now we merge them into one step: find the worker directly from the idr
> by worker gwid (lock_pool_executing_work()).  If the work is on a
> queue, we still use the hash table.
>
> It makes the code more straightforward.
>
> In the future, we may add a percpu worker gwid <--> worker mapping
> cache when needed.
>
> We are also planning to add non-std worker_pools, and we still don't
> know how to implement worker_pool_by_id() for a non-std worker_pool;
> this patch solves that.
>
> This patch slows down the very slow path destroy_worker(); if that
> becomes a problem, we will move the synchronize_rcu() out.
>
> This patch also adds a small overhead (rcu_read_[un]lock) in the fast
> path lock_pool_executing_work().  If that becomes a problem, we will
> use rcu_sched instead.  (local_irq_disable() implies
> rcu_read_lock_sched(), so we could remove this overhead.)
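For reference, the merged lookup described above would look roughly like
the sketch below.  This is only an illustration: worker_gwid_idr,
get_work_gwid() and the exact revalidation step are my assumptions for
the sake of the example, not necessarily what the patch actually does.

/*
 * Sketch only: worker_gwid_idr and get_work_gwid() are assumed names
 * used for illustration, not necessarily the patch's.
 */
static struct worker_pool *lock_pool_executing_work(struct work_struct *work)
{
	int gwid = get_work_gwid(work);	/* gwid stashed in work->data */
	struct worker *worker;
	struct worker_pool *pool;

	rcu_read_lock();

	/* one step: gwid -> worker, instead of pool id -> pool -> hash */
	worker = idr_find(&worker_gwid_idr, gwid);
	if (!worker) {
		rcu_read_unlock();
		return NULL;
	}

	/* the pool must also be RCU protected for this deref to be safe */
	pool = worker->pool;
	spin_lock_irq(&pool->lock);

	/* revalidate under pool->lock: is the worker still executing @work? */
	if (worker->current_work != work) {
		spin_unlock_irq(&pool->lock);
		pool = NULL;
	}

	rcu_read_unlock();
	return pool;
}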
So, I've been looking at the last six patches and am not really sure
whether I want this, at least not in the current form.  We end up
locking, unlocking and then locking the same thing again in the hot
path.  Having a proper locking API is all nice and good *if* it can
actually support the use case at hand; the proposed one can't.

Also, if the worker is RCU protected, then the pool should be too
(otherwise you can't do worker->pool->lock), so it doesn't really
simplify that aspect either.

I do like work items pointing back to workers instead of pools, so I
think I'll try that differently.

Thanks.

--
tejun