If the cpumask is allocated before the thread-group search and the search fails, the cpumask has to be freed on the error path. Instead, allocate cpu_l1_cache_map only after the thread-group search has succeeded.
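To illustrate the ordering in isolation, here is a minimal, self-contained
userspace C sketch of the "search first, allocate after" pattern this patch
applies; find_group_start() and init_map() are hypothetical stand-ins for
the kernel helpers, not the actual kernel code:

	#include <stdio.h>
	#include <stdlib.h>

	/* Hypothetical stand-in for get_cpu_thread_group_start():
	 * returns -1 when the thread-group search fails. */
	static int find_group_start(int cpu)
	{
		return (cpu >= 0 && cpu < 8) ? (cpu & ~1) : -1;
	}

	/* Search first, allocate after: the error path taken before
	 * the allocation never has anything to free. */
	static int init_map(int cpu, unsigned long **map)
	{
		int group_start = find_group_start(cpu);

		if (group_start == -1)
			return -1;	/* nothing allocated yet */

		*map = calloc(1, sizeof(unsigned long));
		if (!*map)
			return -1;

		**map |= 3UL << group_start; /* mark the two sibling threads */
		return 0;
	}

	int main(void)
	{
		unsigned long *map = NULL;

		if (init_map(3, &map) == 0) {
			printf("map = %#lx\n", *map);
			free(map);
		}
		return 0;
	}

With the allocation moved after the search, the early-exit path needs no
free_cpumask_var()-style cleanup, which is exactly what the hunks below do.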
Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Cc: LKML <linux-ker...@vger.kernel.org>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: Nicholas Piggin <npig...@gmail.com>
Cc: Anton Blanchard <an...@ozlabs.org>
Cc: Oliver O'Halloran <ooh...@gmail.com>
Cc: Nathan Lynch <nath...@linux.ibm.com>
Cc: Michael Neuling <mi...@neuling.org>
Cc: Gautham R Shenoy <e...@linux.vnet.ibm.com>
Cc: Ingo Molnar <mi...@kernel.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Valentin Schneider <valentin.schnei...@arm.com>
Cc: Jordan Niethe <jniet...@gmail.com>
Reviewed-by: Gautham R. Shenoy <e...@linux.vnet.ibm.com>
Signed-off-by: Srikar Dronamraju <sri...@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/smp.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index cbca4a8c3314..7d8d44cbab11 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -797,10 +797,6 @@ static int init_cpu_l1_cache_map(int cpu)
 	if (err)
 		goto out;
 
-	zalloc_cpumask_var_node(&per_cpu(cpu_l1_cache_map, cpu),
-				GFP_KERNEL,
-				cpu_to_node(cpu));
-
 	cpu_group_start = get_cpu_thread_group_start(cpu, &tg);
 
 	if (unlikely(cpu_group_start == -1)) {
@@ -809,6 +805,9 @@ static int init_cpu_l1_cache_map(int cpu)
 		goto out;
 	}
 
+	zalloc_cpumask_var_node(&per_cpu(cpu_l1_cache_map, cpu),
+				GFP_KERNEL, cpu_to_node(cpu));
+
 	for (i = first_thread; i < first_thread + threads_per_core; i++) {
 		int i_group_start = get_cpu_thread_group_start(i, &tg);
 
-- 
2.18.2