On Mon, 24 Jul 2017 14:28:00 +1000 Benjamin Herrenschmidt <b...@kernel.crashing.org> wrote:
> There is no guarantee that the various isync's involved with
> the context switch will order the update of the CPU mask with
> the first TLB entry for the new context being loaded by the HW.
>
> Be safe here and add a memory barrier to order any subsequent
> load/store which may bring entries into the TLB.
>
> The corresponding barrier on the other side already exists as
> pte updates use pte_xchg() which uses __cmpxchg_u64 which has
> a sync after the atomic operation.
>
> Signed-off-by: Benjamin Herrenschmidt <b...@kernel.crashing.org>
> ---
>  arch/powerpc/include/asm/mmu_context.h | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
> index ed9a36ee3107..ff1aeb2cd19f 100644
> --- a/arch/powerpc/include/asm/mmu_context.h
> +++ b/arch/powerpc/include/asm/mmu_context.h
> @@ -110,6 +110,7 @@ static inline void switch_mm_irqs_off(struct mm_struct *prev,
>  	/* Mark this context has been used on the new CPU */
>  	if (!cpumask_test_cpu(smp_processor_id(), mm_cpumask(next))) {
>  		cpumask_set_cpu(smp_processor_id(), mm_cpumask(next));
> +		smp_mb();
>  		new_on_cpu = true;
>  	}

I think this is the right thing to do, but it should be commented.

Is hwsync the right barrier? (i.e., it will order the page table walk)

Thanks,
Nick
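
For what it's worth, the pairing being described is the classic store-buffering
pattern: the context-switch side publishes the cpumask bit and may then load
PTEs (via the hardware walker), while the PTE-update side stores the new PTE
and then reads the cpumask to decide which CPUs to invalidate. A minimal
userspace sketch of that ordering, assuming C11 atomics and purely
illustrative names (this is not the kernel code itself), looks roughly like:

/*
 * Hypothetical sketch of the barrier pairing discussed above.
 *
 * side_a models switch_mm_irqs_off(): set the cpumask bit, full barrier
 * (smp_mb()), then a load that could bring a translation into the TLB.
 *
 * side_b models a pte update via pte_xchg(): store the new PTE, full
 * barrier (the sync after __cmpxchg_u64), then read the cpumask.
 *
 * With both barriers the "forbidden" outcome, where A sees the stale PTE
 * and B simultaneously sees the CPU absent from the mask, cannot occur.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int cpumask;   /* models the mm_cpumask(next) bit for this CPU */
static atomic_int pte;       /* models the PTE visible to the TLB walker */
static int a_saw_old_pte, b_saw_cpu_absent;

static void *side_a(void *arg)   /* context switch path */
{
	atomic_store_explicit(&cpumask, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);          /* smp_mb() */
	a_saw_old_pte = (atomic_load_explicit(&pte, memory_order_relaxed) == 0);
	return NULL;
}

static void *side_b(void *arg)   /* PTE update path */
{
	atomic_store_explicit(&pte, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);          /* sync in pte_xchg() */
	b_saw_cpu_absent = (atomic_load_explicit(&cpumask, memory_order_relaxed) == 0);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, side_a, NULL);
	pthread_create(&b, NULL, side_b, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* Forbidden outcome: both sides missed the other's store. */
	printf("forbidden outcome hit: %s\n",
	       (a_saw_old_pte && b_saw_cpu_absent) ? "yes" : "no");
	return 0;
}

The sketch only shows the CPU-to-CPU ordering, of course; whether hwsync also
orders the loads done by the hardware page table walker is exactly the
question raised above.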