In order to consolidate and optimize generic softirq mask accesses, we first need to convert architectures to use per-cpu operations when possible.
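For illustration, here is a minimal sketch (not part of this patch) of the pattern such a conversion enables. The "example_" names below are invented stand-ins; the real sparc64 objects are the per-cpu __cpu_data variable and its __softirq_pending field, as in the hunk further down. The point is that once the field is reached through a plain per-cpu pointer, the same member can also be manipulated with the this_cpu_*() operation family, which is what a later generic consolidation could build on.

/* Sketch only -- "example_" names are invented for illustration. */
#include <linux/percpu.h>

struct example_irq_stat {
	unsigned int __softirq_pending;
};

static DEFINE_PER_CPU(struct example_irq_stat, example_stat);

/* Reach the field through a per-cpu pointer, as the patch does ... */
#define example_softirq_pending() \
	(*this_cpu_ptr(&example_stat.__softirq_pending))

/* ... which also lets the optimized per-cpu accessors operate on the
 * same member directly. */
static inline unsigned int example_read_pending(void)
{
	return this_cpu_read(example_stat.__softirq_pending);
}

static inline void example_set_pending(unsigned int nr)
{
	this_cpu_or(example_stat.__softirq_pending, 1U << nr);
}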
Signed-off-by: Frederic Weisbecker <frede...@kernel.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@kernel.org>
Cc: Sebastian Andrzej Siewior <bige...@linutronix.de>
Cc: David S. Miller <da...@davemloft.net>
Cc: Benjamin Herrenschmidt <b...@kernel.crashing.org>
Cc: Paul Mackerras <pau...@samba.org>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: James E.J. Bottomley <j...@parisc-linux.org>
Cc: Helge Deller <del...@gmx.de>
Cc: Tony Luck <tony.l...@intel.com>
Cc: Fenghua Yu <fenghua...@intel.com>
Cc: Martin Schwidefsky <schwidef...@de.ibm.com>
Cc: Heiko Carstens <heiko.carst...@de.ibm.com>
Cc: Yoshinori Sato <ys...@users.sourceforge.jp>
Cc: Rich Felker <dal...@libc.org>
---
 arch/sparc/include/asm/hardirq_64.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/sparc/include/asm/hardirq_64.h b/arch/sparc/include/asm/hardirq_64.h
index f565402..6aba904 100644
--- a/arch/sparc/include/asm/hardirq_64.h
+++ b/arch/sparc/include/asm/hardirq_64.h
@@ -11,7 +11,7 @@
 
 #define __ARCH_IRQ_STAT
 #define local_softirq_pending() \
-	(local_cpu_data().__softirq_pending)
+	(*this_cpu_ptr(&__cpu_data.__softirq_pending))
 
 void ack_bad_irq(unsigned int irq);
-- 
2.7.4