Hi Srikar,

Thanks for figuring this out.
Srikar Dronamraju <sri...@linux.vnet.ibm.com> writes:
> Some of the per-CPU masks use cpu_cpu_mask as a filter to limit the search
> for related CPUs. On a dlpar add of a CPU, update cpu_cpu_mask before
> updating the per-CPU masks. This will ensure the cpu_cpu_mask is updated
> correctly before its used in setting the masks. Setting the numa_node will
> ensure that when cpu_cpu_mask() gets called, the correct node number is
> used. This code movement helped fix the above call trace.
>
> diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
> index 5a4d59a1070d..1a99d75679a8 100644
> --- a/arch/powerpc/kernel/smp.c
> +++ b/arch/powerpc/kernel/smp.c
> @@ -1521,6 +1521,9 @@ void start_secondary(void *unused)
>
>  	vdso_getcpu_init();
>  #endif
> +	set_numa_node(numa_cpu_lookup_table[cpu]);
> +	set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]));
> +
>  	/* Update topology CPU masks */
>  	add_cpu_to_masks(cpu);
>
> @@ -1539,9 +1542,6 @@ void start_secondary(void *unused)
>  		shared_caches = true;
>  	}
>
> -	set_numa_node(numa_cpu_lookup_table[cpu]);
> -	set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]));
> -

Regardless of your change: at boot time, this set of calls to
set_numa_node() and set_numa_mem() is redundant, right? Because
smp_prepare_cpus() has:

	for_each_possible_cpu(cpu) {
		...
		if (cpu_present(cpu)) {
			set_cpu_numa_node(cpu, numa_cpu_lookup_table[cpu]);
			set_cpu_numa_mem(cpu,
				local_memory_node(numa_cpu_lookup_table[cpu]));
		}

I would rather that, when onlining a CPU that happens to have been
dynamically added after boot, we enter start_secondary() with conditions
equivalent to those at boot time. Or as close to that as is practical.

So I'd suggest that pseries_add_processor() be made to update these
things when the CPUs are marked present, before onlining them.