On 18/10/2016 at 09:10, Balbir Singh wrote:
Michael Ellerman debugged an issue w.r.t. workqueue changes
(see https://lkml.org/lkml/2016/10/17/352) down to the fact
that we don't set up our per-cpu (cpu-to-node) bindings early
enough (in setup_per_cpu_areas(), as x86 does).
This led to a problem with the workqueue changes, where the
cpus seen by for_each_node() in workqueue_init_early() were
different from their binding seen later in
for_each_possible_cpu(cpu) {
node = cpu_to_node(cpu)
...
}
In setup_arch()->initmem_init() we have access to the binding
in numa_cpu_lookup_table[].
This patch implements Michael's suggestion of setting up
the per-cpu node bindings inside setup_per_cpu_areas().
I did not remove the original setting of these values
from smp_prepare_cpus(). I've also not set up per-cpu
memory via set_cpu_numa_mem(), since zonelists are not
yet built by the time we do the per-cpu setup.
Reported-by: Michael Ellerman <m...@ellerman.id.au>
Signed-off-by: Balbir Singh <bsinghar...@gmail.com>
I see we still have this patch as "New" in patchwork.
Is it similar to commit ba4a648f12f4 ("powerpc/numa: Fix percpu
allocations to be NUMA aware"), or is it something else?
Thanks
Christophe
---
arch/powerpc/kernel/setup_64.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index c3e1290..842415a 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -625,6 +625,8 @@ void __init setup_per_cpu_areas(void)
 	for_each_possible_cpu(cpu) {
 		__per_cpu_offset[cpu] = delta + pcpu_unit_offsets[cpu];
 		paca[cpu].data_offset = __per_cpu_offset[cpu];
+
+		set_cpu_numa_node(cpu, numa_cpu_lookup_table[cpu]);
 	}
 }
 #endif