On Fri, 2017-08-04 at 14:06 +0200, Frederic Barrat wrote:
> > +#ifdef CONFIG_PPC_BOOK3S_64
> > +static inline int mm_is_thread_local(struct mm_struct *mm)
> > +{
> > +	if (atomic_read(&mm->context.active_cpus) > 1)
> > +		return false;
> > +	return cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm));
> > +}
> > +#else /* CONFIG_PPC_BOOK3S_64 */
> 
> While working on something related (marking a memory context as
> needing a global TLBI if it is used behind an NPU or PSL):
> http://patchwork.ozlabs.org/patch/796775/
> 
> Michael raised the point that the store for the pte update must not be
> reordered with the load that decides the scope of the TLBI, and had
> convinced me that a memory barrier was required.
> 
> Couldn't we have the same problem here, where the atomic read is
> reordered with the store of the invalid PTE?
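To make the concern concrete, here is a minimal userspace sketch of the
store->load reordering in question, using C11 atomics. The names pte and
active_cpus only mirror the kernel fields; this is not kernel code, just
the classic store-buffering pattern that a full barrier (sync on POWER)
forbids:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int pte = 1;          /* 1 = valid, 0 = invalid          */
static atomic_int active_cpus = 1;  /* mirrors mm->context.active_cpus */

/* Plays the role of the CPU clearing the PTE and choosing TLBI scope. */
static void *unmapper(void *arg)
{
	atomic_store_explicit(&pte, 0, memory_order_relaxed);
	/* The "sync": remove this fence and both threads can read stale data. */
	atomic_thread_fence(memory_order_seq_cst);
	if (atomic_load_explicit(&active_cpus, memory_order_relaxed) > 1)
		puts("unmapper: mm is global, broadcast TLBI");
	else
		puts("unmapper: mm looks local, local TLBI only");
	return NULL;
}

/* Plays the role of a CPU that starts using the mm concurrently. */
static void *attacher(void *arg)
{
	atomic_fetch_add_explicit(&active_cpus, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);
	if (atomic_load_explicit(&pte, memory_order_relaxed))
		puts("attacher: still sees the valid PTE");
	else
		puts("attacher: sees the cleared PTE");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, unmapper, NULL);
	pthread_create(&b, NULL, attacher, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

With both fences in place, "mm looks local" and "still sees the valid
PTE" can never both be printed. Without them, a weakly ordered CPU may
produce exactly that pair: a local-only TLBI while another CPU still
sees the valid PTE, which is the hazard described above.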
The store of the invalid PTE is done with a pte_update(), which contains
a sync, as far as I can tell.

Cheers,
Ben.
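PS: As a sketch of why that ordering is sufficient. The helper names
below are hypothetical, not the actual arch/powerpc code, and it assumes
pte_update() does include the full barrier as stated above:

/* Hypothetical flush path, for illustration only. */
static void example_unmap_flush(struct mm_struct *mm, unsigned long addr,
				pte_t *ptep)
{
	/* Clears the PTE; per the above, this includes a full barrier (sync). */
	pte_update(mm, addr, ptep, ~0UL, 0, 0);

	/*
	 * The atomic_read() in mm_is_thread_local() is therefore ordered
	 * after the PTE store is visible to other CPUs.
	 */
	if (mm_is_thread_local(mm))
		local_tlb_flush(addr);   /* hypothetical tlbiel path */
	else
		global_tlb_flush(addr);  /* hypothetical tlbie path  */
}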