Hello,

On Wed, Feb 03, 2016 at 08:12:19PM +0100, Thomas Gleixner wrote:
> > Signed-off-by: Tejun Heo <t...@kernel.org>
> > Reported-by: Mike Galbraith <umgwanakikb...@gmail.com>
> > Cc: Tang Chen <tangc...@cn.fujitsu.com>
> > Cc: Rafael J. Wysocki <raf...@kernel.org>
> > Cc: Len Brown <len.br...@intel.com>
> > Cc: sta...@vger.kernel.org # v4.3+
>
> 4.3+ ?  Hasn't 874bbfe600a6 been backported to older stable kernels?
>
> Adding a 'Fixes: 874bbfe600a6 ...' tag is what you really want here.
Oops, you're right.  Will add that once Mike confirms the fix.

> > @@ -570,6 +570,16 @@ static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
> >  						  int node)
> >  {
> >  	assert_rcu_or_wq_mutex_or_pool_mutex(wq);
> > +
> > +	/*
> > +	 * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
> > +	 * delayed item is pending.  The plan is to keep CPU -> NODE
> > +	 * mapping valid and stable across CPU on/offlines.  Once that
> > +	 * happens, this workaround can be removed.
>
> So what happens if the complete node is offline?

The pool_workqueue lookup itself should be fine, as dfl_pwq is
assigned to all nodes by default.  When the node comes back online,
things can currently break because the cpu -> node mapping may change.
That's what Tang has been working on.  It's a bigger problem
throughout the memory allocation path, though, because there's no
synchronization around the cpu -> node mapping.  Hopefully, the
pending patchset can get through sooner rather than later.

Thanks.

--
tejun
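
For reference, a minimal sketch of the workaround being discussed, pieced
together from the quoted hunk and the explanation above: when @node is
NUMA_NO_NODE, fall back to the default pool_workqueue (wq->dfl_pwq), which
is installed for every node; otherwise use the per-node table.  This is an
illustrative reconstruction, not the exact upstream patch; the
rcu_dereference_raw() of wq->numa_pwq_tbl[] line is assumed from the
pre-existing function body that the quoted hunk does not show.

static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
						  int node)
{
	assert_rcu_or_wq_mutex_or_pool_mutex(wq);

	/*
	 * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
	 * delayed item is pending.  The plan is to keep CPU -> NODE
	 * mapping valid and stable across CPU on/offlines.  Once that
	 * happens, this workaround can be removed.
	 */
	if (unlikely(node == NUMA_NO_NODE))
		return wq->dfl_pwq;	/* dfl_pwq covers every node */

	return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
}

The fallback only affects the lookup path; as noted above, a node going
fully offline and coming back is a separate problem because the
cpu -> node mapping itself may change.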