3.16.61-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Andre Przywara <andre.przyw...@arm.com>

commit 878a84d5a8a18a4ab241d40cebb791d6aedf5605 upstream.

Commit 8053871d0f7f ("smp: Fix smp_call_function_single_async()
locking") introduced a call to smp_load_acquire() with a u16 argument,
but we only cared about u32 and u64 types in that function so far.
Fortunately this resulted in a compiler warning, pointing at an
uninitialized use. Due to the structure of the implementation, the
compiler misses the corresponding bug in smp_store_release(), though.
Add the u16 and u8 variants using ldarh/stlrh and ldarb/stlrb,
respectively. Together with the compiletime_assert_atomic_type() check
this should cover all cases now.
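
As a purely illustrative sketch (not part of this patch; the names
below are invented), the kind of 16-bit publish/consume pairing these
new variants make possible on arm64 looks roughly like this:

  /* Hypothetical example only.  smp_store_release()/smp_load_acquire()
   * come from the <asm/barrier.h> macros this patch extends. */
  #include <linux/types.h>
  #include <asm/barrier.h>

  static u32 payload;
  static u16 ready;

  static void producer(void)
  {
          payload = 42;                   /* payload written first */
          smp_store_release(&ready, 1);   /* stlrh: payload visible before ready */
  }

  static u32 consumer(void)
  {
          while (!smp_load_acquire(&ready))  /* ldarh: ready read before payload */
                  ;                          /* spin until published */
          return payload;                    /* guaranteed to observe 42 */
  }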

Acked-by: Will Deacon <will.dea...@arm.com>
Signed-off-by: Andre Przywara <andre.przyw...@arm.com>
Signed-off-by: Will Deacon <will.dea...@arm.com>
Signed-off-by: Ben Hutchings <b...@decadent.org.uk>
---
 arch/arm64/include/asm/barrier.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -64,6 +64,14 @@ do {                                                        \
                { .__val = (__force typeof(*p)) (v) };                  \
        compiletime_assert_atomic_type(*p);                             \
        switch (sizeof(*p)) {                                           \
+       case 1:                                                         \
+               asm volatile ("stlrb %w1, %0"                           \
+                               : "=Q" (*p) : "r" (v) : "memory");      \
+               break;                                                  \
+       case 2:                                                         \
+               asm volatile ("stlrh %w1, %0"                           \
+                               : "=Q" (*p) : "r" (v) : "memory");      \
+               break;                                                  \
        case 4:                                                         \
                asm volatile ("stlr %w1, %0"                            \
                                : "=Q" (*p)                             \
@@ -84,6 +92,14 @@ do {                                                        \
        typeof(*p) ___p1;                                               \
        compiletime_assert_atomic_type(*p);                             \
        switch (sizeof(*p)) {                                           \
+       case 1:                                                         \
+               asm volatile ("ldarb %w0, %1"                           \
+                       : "=r" (___p1) : "Q" (*p) : "memory");          \
+               break;                                                  \
+       case 2:                                                         \
+               asm volatile ("ldarh %w0, %1"                           \
+                       : "=r" (___p1) : "Q" (*p) : "memory");          \
+               break;                                                  \
        case 4:                                                         \
                asm volatile ("ldar %w0, %1"                            \
                        : "=r" (___p1) : "Q" (*p) : "memory");          \
