Hi All,

This fixes fall-out from a patch I submitted two years ago which started
allowing simplify-rtx to fold two logical right shifts, first by a and then
by b, into a single >> (a + b).

However, this can produce a shift whose count ends up equal to the bit size
of the shift mode, and such an out-of-range shift is undefined behavior on
most platforms.
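
For instance (my own illustration, not taken from the patch):

  unsigned int
  f (unsigned int x)
  {
    return (x >> 16) >> 16;  /* now folded into x >> 32 in SImode  */
  }

Here the combined count equals the 32-bit mode size, so the folded shift
is out of range.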

This patch changes the code to truncate the result to 0 when the combined
shift amount goes out of range.  Before my older patch this truncation used
to happen in combine when it saw the two shifts; since we now fold them away
here, combine never gets the chance to do it.
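
Concretely, the semantics being restored are (a sketch of mine, not the
patch code):

  /* For a logical right shift, once the count reaches the mode's bit
     width, every value bit has been shifted out and only zeros were
     shifted in, so the result collapses to the constant 0.  */
  unsigned int
  lshiftrt_semantics (unsigned int x, unsigned int count)
  {
    return count >= 32 ? 0U : x >> count;  /* 32 == SImode bit width.  */
  }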

The issue mostly affects GCC 8 and 9, since on GCC 10 the back-end knows how
to deal with this shift constant, but it's better to do the right thing in
simplify-rtx anyway.

Note that this doesn't take care of the arithmetic right shift case, where
you could instead replace the shift amount with MODE_BITS (mode) - 1, but
that's not a regression so I'm punting on it.
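
For reference, the arithmetic case would clamp the count rather than zero
the result, along the lines of (again just an illustration, assuming a
sign-extending >> for signed int, as GCC implements it):

  int
  ashiftrt_semantics (int x, unsigned int count)
  {
    /* An arithmetic right shift saturates: any count >= 31 leaves only
       the replicated sign bit, so the count can be clamped to bits - 1.  */
    return x >> (count >= 31 ? 31 : count);
  }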

Bootstrapped and regtested on aarch64-none-linux-gnu and x86_64-pc-linux-gnu
with no issues.

OK for trunk, and for backport to GCC 8 and 9 after some stew?

Thanks,
Tamar

gcc/ChangeLog:

2020-01-31  Tamar Christina  <tamar.christ...@arm.com>

        PR 91838
        * simplify-rtx.c (simplify_binary_operation_1): Update LSHIFTRT case
        to fold to zero when the combined shift count is out of range.

gcc/testsuite/ChangeLog:

2020-01-31  Tamar Christina  <tamar.christ...@arm.com>

        PR 91838
        * g++.dg/pr91838.C: New test.

-- 
diff --git a/gcc/simplify-rtx.c b/gcc/simplify-rtx.c
index eff1d07a2533c7bda5f0529cd318f08e6d5209d6..543cd5885105fb0e4568568a3c876c74cc1068bf 100644
--- a/gcc/simplify-rtx.c
+++ b/gcc/simplify-rtx.c
@@ -3647,9 +3647,21 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
 	{
 	  rtx tmp = gen_int_shift_amount
 	    (inner_mode, INTVAL (XEXP (SUBREG_REG (op0), 1)) + INTVAL (op1));
-	  tmp = simplify_gen_binary (code, inner_mode,
-				     XEXP (SUBREG_REG (op0), 0),
-				     tmp);
+
+	  /* Combine would usually zero out the value when combining two
+	     logical shifts whose total count is greater than or equal to
+	     the mode's bit width.  However, since we fold away one of the
+	     shifts here, combine never sees them, so truncate the shift
+	     immediately if it is out of range.  */
+	  if (code == LSHIFTRT
+	      && INTVAL (tmp) >= GET_MODE_BITSIZE (inner_mode))
+	    tmp = const0_rtx;
+	  else
+	    tmp = simplify_gen_binary (code,
+				       inner_mode,
+				       XEXP (SUBREG_REG (op0), 0),
+				       tmp);
+
 	  return lowpart_subreg (int_mode, tmp, inner_mode);
 	}
 
diff --git a/gcc/testsuite/g++.dg/pr91838.C b/gcc/testsuite/g++.dg/pr91838.C
new file mode 100644
index 0000000000000000000000000000000000000000..4dbaef05ce84770e1c8726dd501b40309a352aaf
--- /dev/null
+++ b/gcc/testsuite/g++.dg/pr91838.C
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-O2" } */
+/* { dg-skip-if "" { *-*-* } {-std=c++98} } */
+
+using T = unsigned char; // or ushort, or uint
+using V [[gnu::vector_size(8)]] = T;
+V f(V x) {
+  return x >> 8 * sizeof(T);
+}
+
+/* { dg-final { scan-assembler {pxor\s+%xmm0,\s+%xmm0} { target x86_64-*-* } } } */
