On Mon, 2017-07-24 at 04:28:00 UTC, Benjamin Herrenschmidt wrote:
> There is no guarantee that the various isync's involved with
> the context switch will order the update of the CPU mask with
> the first TLB entry for the new context being loaded by the HW.
>
> Be safe here and add a memory barrier to order any subsequent
> load/store which may bring entries into the TLB.
>
> The corresponding barrier on the other side already exists as
> pte updates use pte_xchg() which uses __cmpxchg_u64 which has
> a sync after the atomic operation.
>
> Signed-off-by: Benjamin Herrenschmidt <b...@kernel.crashing.org>
> Reviewed-by: Nicholas Piggin <npig...@gmail.com>
Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/1a92a80ad386a1a6e3b36d576d52a1

cheers
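
For readers following along, here is a minimal sketch of the kind of change
the commit message describes: a full barrier after the CPU-mask update in the
powerpc context-switch path, pairing with the sync that follows the atomic in
__cmpxchg_u64() on the PTE-update side. cpumask_set_cpu(), mm_cpumask() and
smp_mb() are standard kernel interfaces, but the exact function and placement
shown here are an illustrative assumption, not the applied patch itself.

/*
 * Illustrative sketch only -- shaped like the powerpc context-switch
 * path, with assumed placement of the barrier described above.
 */
static inline void example_switch_mm(struct mm_struct *prev,
				     struct mm_struct *next)
{
	/* Mark that this CPU is now using the new mm. */
	if (!cpumask_test_cpu(smp_processor_id(), mm_cpumask(next))) {
		cpumask_set_cpu(smp_processor_id(), mm_cpumask(next));

		/*
		 * Order the cpumask update before any subsequent
		 * load/store that could bring a TLB entry in for the
		 * new context. The isync's in the context switch do
		 * not guarantee this ordering on their own.
		 *
		 * The matching barrier on the other side is the sync
		 * after the atomic in __cmpxchg_u64(), used by
		 * pte_xchg() for pte updates.
		 */
		smp_mb();
	}

	/* ... remainder of the context switch ... */
}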