Michael Neuling <mi...@neuling.org> writes:
> diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
> index bda8c43be7..4a19cdd8a0 100644
> --- a/arch/powerpc/mm/tlb-radix.c
> +++ b/arch/powerpc/mm/tlb-radix.c
> @@ -50,6 +50,9 @@ static inline void _tlbiel_pid(unsigned long pid, unsigned long ric)
>       for (set = 0; set < POWER9_TLB_SETS_RADIX ; set++) {
>               __tlbiel_pid(pid, set, ric);
>       }
> +     if (cpu_has_feature(CPU_FTR_POWER9_DD1))
> +             asm volatile(PPC_SLBIA(0x7)
> +                          : : :"memory");

Ah, of course, I'll use slbia to invalidate the ERAT.

How about we do:

#define PPC_INVALIDATE_ERAT     PPC_SLBIA(0x7)
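
Then the call site becomes something like this. Just a sketch; I'm
assuming the define would live next to PPC_SLBIA in
arch/powerpc/include/asm/ppc-opcode.h, and the comment wording is mine:

	for (set = 0; set < POWER9_TLB_SETS_RADIX; set++) {
		__tlbiel_pid(pid, set, ric);
	}

	/*
	 * On POWER9 DD1 the tlbiel loop above apparently doesn't
	 * flush the ERAT, so invalidate it explicitly with slbia
	 * IH=7.
	 */
	if (cpu_has_feature(CPU_FTR_POWER9_DD1))
		asm volatile(PPC_INVALIDATE_ERAT : : : "memory");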


Or bike-shed me a name for it.

cheers
