----- On Jan 8, 2022, at 11:43 AM, Andy Lutomirski l...@kernel.org wrote:

> powerpc did the following on some, but not all, paths through
> switch_mm_irqs_off():
> 
>       /*
>        * Only need the full barrier when switching between processes.
>        * Barrier when switching from kernel to userspace is not
>        * required here, given that it is implied by mmdrop(). Barrier
>        * when switching from userspace to kernel is not needed after
>        * store to rq->curr.
>        */
>       if (likely(!(atomic_read(&next->membarrier_state) &
>                    (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
>                     MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
>               return;
> 
> This is puzzling: if !prev, then one might expect that we are switching
> from kernel to user, not user to kernel, which is inconsistent with the
> comment.  But this is all nonsense, because the one and only caller would
> never have prev == NULL and would, in fact, OOPS if prev == NULL.
> 
> In any event, this code is unnecessary, since the new generic
> membarrier_finish_switch_mm() provides the same barrier without arch help.
> 
> arch/powerpc/include/asm/membarrier.h remains as an empty header,
> because a later patch in this series will add code to it.

My disagreement with "membarrier: Make the post-switch-mm barrier explicit"
may affect this patch significantly, or even make it irrelevant.

Thanks,

Mathieu

> 
> Cc: Michael Ellerman <m...@ellerman.id.au>
> Cc: Benjamin Herrenschmidt <b...@kernel.crashing.org>
> Cc: Paul Mackerras <pau...@samba.org>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: Nicholas Piggin <npig...@gmail.com>
> Cc: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Signed-off-by: Andy Lutomirski <l...@kernel.org>
> ---
> arch/powerpc/include/asm/membarrier.h | 24 ------------------------
> arch/powerpc/mm/mmu_context.c         |  1 -
> 2 files changed, 25 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/membarrier.h b/arch/powerpc/include/asm/membarrier.h
> index de7f79157918..b90766e95bd1 100644
> --- a/arch/powerpc/include/asm/membarrier.h
> +++ b/arch/powerpc/include/asm/membarrier.h
> @@ -1,28 +1,4 @@
> #ifndef _ASM_POWERPC_MEMBARRIER_H
> #define _ASM_POWERPC_MEMBARRIER_H
> 
> -static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
> -                                          struct mm_struct *next,
> -                                          struct task_struct *tsk)
> -{
> -     /*
> -      * Only need the full barrier when switching between processes.
> -      * Barrier when switching from kernel to userspace is not
> -      * required here, given that it is implied by mmdrop(). Barrier
> -      * when switching from userspace to kernel is not needed after
> -      * store to rq->curr.
> -      */
> -     if (IS_ENABLED(CONFIG_SMP) &&
> -         likely(!(atomic_read(&next->membarrier_state) &
> -                  (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
> -                   MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
> -             return;
> -
> -     /*
> -      * The membarrier system call requires a full memory barrier
> -      * after storing to rq->curr, before going back to user-space.
> -      */
> -     smp_mb();
> -}
> -
> #endif /* _ASM_POWERPC_MEMBARRIER_H */
> diff --git a/arch/powerpc/mm/mmu_context.c b/arch/powerpc/mm/mmu_context.c
> index 74246536b832..5f2daa6b0497 100644
> --- a/arch/powerpc/mm/mmu_context.c
> +++ b/arch/powerpc/mm/mmu_context.c
> @@ -84,7 +84,6 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>               asm volatile ("dssall");
> 
>       if (!new_on_cpu)
> -             membarrier_arch_switch_mm(prev, next, tsk);
> 
>       /*
>        * The actual HW switching method differs between the various
> --
> 2.33.1

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
