On 24/02/2023 at 11:39, Nysal Jan K.A wrote:
> Remove arch_atomic_try_cmpxchg_lock function as it is no longer used
> since commit 9f61521c7a28 ("powerpc/qspinlock: powerpc qspinlock
> implementation")
>
> Signed-off-by: Nysal Jan K.A <ny...@linux.ibm.com>

Reviewed-by: Christophe Leroy <christophe.le...@csgroup.eu>

> ---
>  arch/powerpc/include/asm/atomic.h | 29 -----------------------------
>  1 file changed, 29 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
> index 486ab7889121..b3a53830446b 100644
> --- a/arch/powerpc/include/asm/atomic.h
> +++ b/arch/powerpc/include/asm/atomic.h
> @@ -130,35 +130,6 @@ ATOMIC_OPS(xor, xor, "", K)
>  #define arch_atomic_xchg_relaxed(v, new) \
>  	arch_xchg_relaxed(&((v)->counter), (new))
>
> -/*
> - * Don't want to override the generic atomic_try_cmpxchg_acquire, because
> - * we add a lock hint to the lwarx, which may not be wanted for the
> - * _acquire case (and is not used by the other _acquire variants so it
> - * would be a surprise).
> - */
> -static __always_inline bool
> -arch_atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
> -{
> -	int r, o = *old;
> -	unsigned int eh = IS_ENABLED(CONFIG_PPC64);
> -
> -	__asm__ __volatile__ (
> -"1:	lwarx	%0,0,%2,%[eh]	# atomic_try_cmpxchg_acquire	\n"
> -"	cmpw	0,%0,%3						\n"
> -"	bne-	2f						\n"
> -"	stwcx.	%4,0,%2						\n"
> -"	bne-	1b						\n"
> -"\t"	PPC_ACQUIRE_BARRIER "					\n"
> -"2:								\n"
> -	: "=&r" (r), "+m" (v->counter)
> -	: "r" (&v->counter), "r" (o), "r" (new), [eh] "n" (eh)
> -	: "cr0", "memory");
> -
> -	if (unlikely(r != o))
> -		*old = r;
> -	return likely(r == o);
> -}
> -
>  /**
>   * atomic_fetch_add_unless - add unless the number is a given value
>   * @v: pointer of type atomic_t
> --
> 2.39.2