Hi,
I noticed that the atomic_store<mode> pattern is the only one in atomics.md
that uses memory_operand as its predicate.  This seems like a typo to me, and
it also causes a problem: the general address expressions accepted by
memory_operand are kept until LRA discovers that they don't match the "Q"
constraint, at which point LRA has to reload the address out of the memory
reference.  Since there is no combine pass after LRA, inefficient code like
the following is generated for atomic stores (a rough sketch of the "Q"
constraint and the replacement predicate is given after the examples):
        add     x1, x29, 64
        add     x0, x1, x0, sxtw 3
        sub     x0, x0, #16
        stlr    x19, [x0]
Or:
        sxtw    x0, w0
        add     x1, x29, 48
        add     x1, x1, x0, sxtw 3
        stlr    x19, [x1]
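
For reference, both the "Q" constraint (config/aarch64/constraints.md) and
aarch64_sync_memory_operand (config/aarch64/predicates.md) only accept a
memory operand whose address is a single base register; quoting the
definitions roughly from memory (not verbatim):

  (define_memory_constraint "Q"
   "A memory address which uses a single base register with no offset."
   (and (match_code "mem")
        (match_test "REG_P (XEXP (op, 0))")))

  (define_predicate "aarch64_sync_memory_operand"
    (and (match_operand 0 "memory_operand")
         (match_code "reg" "0")))

Since memory_operand accepts any legitimate address, the mismatch with "Q" is
only noticed once LRA checks the constraint, which is what forces the reload
shown above.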

With this patch, atomic_store<mode> is forced to use a single base register
as its address (as the "Q" constraint requires) from an earlier compilation
phase, and better code is generated:
        add     x1, x29, 48
        add     x1, x1, x0, sxtw 3
        stlr    x19, [x1]

Bootstrapped and tested on aarch64.  Is it OK?

Thanks,
bin

2015-12-01  Bin Cheng  <bin.ch...@arm.com>

        * config/aarch64/atomics.md (atomic_store<mode>): Use predicate
        aarch64_sync_memory_operand.

diff --git a/gcc/config/aarch64/atomics.md b/gcc/config/aarch64/atomics.md
index 3c034fb..68dc27a 100644
--- a/gcc/config/aarch64/atomics.md
+++ b/gcc/config/aarch64/atomics.md
@@ -481,7 +481,7 @@
 )
 
 (define_insn "atomic_store<mode>"
-  [(set (match_operand:ALLI 0 "memory_operand" "=Q")
+  [(set (match_operand:ALLI 0 "aarch64_sync_memory_operand" "=Q")
     (unspec_volatile:ALLI
       [(match_operand:ALLI 1 "general_operand" "rZ")
        (match_operand:SI 2 "const_int_operand")]                       ;; model
