On Wed, 20 Dec 2017 18:53:24 +0100 Gabriel Paubert <paub...@iram.es> wrote:
> On Thu, Dec 21, 2017 at 12:52:01AM +1000, Nicholas Piggin wrote:
> > Shifted left by 16 bits, so the low 16 bits of r14 remain available.
> > This allows per-cpu pointers to be dereferenced with a single extra
> > shift whereas previously it was a load and add.
> > ---
> >  arch/powerpc/include/asm/paca.h   |  5 +++++
> >  arch/powerpc/include/asm/percpu.h |  2 +-
> >  arch/powerpc/kernel/entry_64.S    |  5 -----
> >  arch/powerpc/kernel/head_64.S     |  5 +----
> >  arch/powerpc/kernel/setup_64.c    | 11 +++++++++--
> >  5 files changed, 16 insertions(+), 12 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
> > index cd6a9a010895..4dd4ac69e84f 100644
> > --- a/arch/powerpc/include/asm/paca.h
> > +++ b/arch/powerpc/include/asm/paca.h
> > @@ -35,6 +35,11 @@
> > 
> >  register struct paca_struct *local_paca asm("r13");
> >  #ifdef CONFIG_PPC_BOOK3S
> > +/*
> > + * The top 32-bits of r14 is used as the per-cpu offset, shifted by
> > + * PAGE_SHIFT.
> 
> Top 32, really? It's 48 in later comments.

Yep, I used 32 to start with but it wasn't enough. Will fix.

Thanks,
Nick