On 9/2/24 2:01 PM, Raphael Moreira Zinsly wrote:
Improve the handling of large constants in riscv_build_integer: generate
better code for constants where the high half can be constructed by
shifting or shNadd-ing the low half, or where the two halves differ by
less than 2k.

gcc/ChangeLog:
        * config/riscv/riscv.cc (riscv_build_integer): Detect new case
        of constants that can be improved.
        (riscv_move_integer): Add synthesis for concatenating constants
        without Zbkb.

gcc/testsuite/ChangeLog:
        * gcc.target/riscv/synthesis-12.c: New test.
        * gcc.target/riscv/synthesis-13.c: New test.
        * gcc.target/riscv/synthesis-14.c: New test.
---
  gcc/config/riscv/riscv.cc                     | 140 +++++++++++++++++-
  gcc/testsuite/gcc.target/riscv/synthesis-12.c |  26 ++++
  gcc/testsuite/gcc.target/riscv/synthesis-13.c |  26 ++++
  gcc/testsuite/gcc.target/riscv/synthesis-14.c |  28 ++++
  4 files changed, 214 insertions(+), 6 deletions(-)
  create mode 100644 gcc/testsuite/gcc.target/riscv/synthesis-12.c
  create mode 100644 gcc/testsuite/gcc.target/riscv/synthesis-13.c
  create mode 100644 gcc/testsuite/gcc.target/riscv/synthesis-14.c

diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index b963a57881e..64d5611cbd2 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1231,6 +1231,124 @@ riscv_build_integer (struct riscv_integer_op *codes, HOST_WIDE_INT value,
        }
}
+  else if (cost > 4 && TARGET_64BIT && can_create_pseudo_p ()
+          && allow_new_pseudos)
+    {
+      struct riscv_integer_op alt_codes[RISCV_MAX_INTEGER_OPS];
+      int alt_cost;
+
+      unsigned HOST_WIDE_INT loval = value & 0xffffffff;
+      unsigned HOST_WIDE_INT hival = (value & ~loval) >> 32;
+      bool bit31 = (hival & 0x80000000) != 0;
+      int trailing_shift = ctz_hwi (loval) - ctz_hwi (hival);
+      int leading_shift = clz_hwi (loval) - clz_hwi (hival);
+      int shiftval = 0;
+
+      /* Adjust the shift into the high half accordingly.  */
+      if ((trailing_shift > 0 && hival == (loval >> trailing_shift))
+         || (trailing_shift < 0 && hival == (loval << trailing_shift)))
+       shiftval = 32 - trailing_shift;
+      else if ((leading_shift < 0 && hival == (loval >> leading_shift))
+               || (leading_shift > 0 && hival == (loval << leading_shift)))

Don't these trigger undefined behavior when trailing_shift or leading_shift is < 0? We shouldn't ever generate negative shift counts.
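
Something along these lines might work (a rough, untested sketch, assuming 32 - trailing_shift is still the intended adjustment when the count is negative; the leading_shift arms would need the same treatment). The idea is just to negate the count so the actual shift amount is never negative:

       /* Adjust the shift into the high half accordingly.  */
       if (trailing_shift > 0 && hival == (loval >> trailing_shift))
         shiftval = 32 - trailing_shift;
       /* A negative count means the high half is the low half shifted
          left, so shift by the negated (non-negative) count instead.  */
       else if (trailing_shift < 0 && hival == (loval << -trailing_shift))
         shiftval = 32 - trailing_shift;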

Generally looks pretty good, but we do need to get those negative shifts fixed before integration.

jeff
