So this BZ is a case where we incorrectly indicated that the operand array was suitable for the T-Head load/store pair instructions.

In particular, there's a test which checks alignment, but it happens *before* we know whether the operands are going to be reversed. So the routine reported the operands as suitable.

At a later point the operands have been reversed into the proper order, and we realize the alignment test should have failed, resulting in an unrecognized insn.
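To make the ordering problem concrete, here's a condensed paraphrase of the old flow in th_mempair_operands_p (simplified from the hunk below, not the exact source):

    /* Old flow, condensed: the alignment of mem_1 was checked first...  */
    if (riscv_slow_unaligned_access_p
        && known_lt (MEM_ALIGN (mem_1), GET_MODE_SIZE (mode) * BITS_PER_UNIT))
      return false;

    /* ...but only this call discovers whether the operands are reversed,
       i.e. whether mem_2 is really the first (lower-addressed) access.
       When they were reversed, the access whose alignment actually
       mattered was never inspected, so the routine could wrongly
       report success.  */
    bool reversed = false;
    if (!th_mempair_check_consecutive_mems (mode, &mem_1, &mem_2, &reversed))
      return false;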

The fix moves the reversal check earlier and actually swaps the local copies of the operands. That in turn allows for simpler testing of alignment, ordering, etc.

I've tested this on rv32 and rv64 in my tester. I don't know offhand if the patch from Filip that's been causing headaches for the RISC-V port has been reverted/fixed, so there's a nonzero chance the pre-commit CI tester will fail. I'll keep an eye on it and act appropriately.

Jeff

        PR target/116720
gcc/
        * config/riscv/thead.cc (th_mempair_operands_p): Test for
        aligned memory after swapping operands.  Simplify test for
        first memory access as well.

gcc/testsuite/
        * gcc.target/riscv/pr116720.c: New test.

diff --git a/gcc/config/riscv/thead.cc b/gcc/config/riscv/thead.cc
index 707d91076eb..baf74cffb5c 100644
--- a/gcc/config/riscv/thead.cc
+++ b/gcc/config/riscv/thead.cc
@@ -285,19 +285,27 @@ th_mempair_operands_p (rtx operands[4], bool load_p,
   if (MEM_VOLATILE_P (mem_1) || MEM_VOLATILE_P (mem_2))
     return false;
 
-  /* If we have slow unaligned access, we only accept aligned memory.  */
-  if (riscv_slow_unaligned_access_p
-      && known_lt (MEM_ALIGN (mem_1), GET_MODE_SIZE (mode) * BITS_PER_UNIT))
-    return false;
 
   /* Check if the addresses are in the form of [base+offset].  */
   bool reversed = false;
   if (!th_mempair_check_consecutive_mems (mode, &mem_1, &mem_2, &reversed))
     return false;
 
+  /* If necessary, reverse the local copy of the operands to simplify
+     testing of alignments and mempair operand.  */
+  if (reversed)
+    {
+      std::swap (mem_1, mem_2);
+      std::swap (reg_1, reg_2);
+    }
+
+  /* If we have slow unaligned access, we only accept aligned memory.  */
+  if (riscv_slow_unaligned_access_p
+      && known_lt (MEM_ALIGN (mem_1), GET_MODE_SIZE (mode) * BITS_PER_UNIT))
+    return false;
+
   /* The first memory accesses must be a mempair operand.  */
-  if ((!reversed && !th_mempair_operand_p (mem_1, mode))
-      || (reversed && !th_mempair_operand_p (mem_2, mode)))
+  if (!th_mempair_operand_p (mem_1, mode))
     return false;
 
   /* The operands must be of the same size.  */
diff --git a/gcc/testsuite/gcc.target/riscv/pr116720.c b/gcc/testsuite/gcc.target/riscv/pr116720.c
new file mode 100644
index 00000000000..0f795aba0bf
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/pr116720.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32ixtheadmempair -mabi=ilp32 -mno-strict-align" } */
+
+struct a {
+  signed : 22;
+};
+volatile short b;
+int *c;
+void d(int e, struct a) {
+  b;
+  c = &e;
+}
