On 10/23/2015 03:15 AM, Richard Biener wrote:
On Fri, Oct 23, 2015 at 8:22 AM, Jeff Law <l...@redhat.com> wrote:
/* This is another case of narrowing, specifically when there's an outer
BIT_AND_EXPR which masks off bits outside the type of the innermost
operands. Like the previous case we have to convert the operands
to unsigned types to avoid introducing undefined behaviour for the
arithmetic operation. */
Essentially, that pattern in match.pd is trying to catch cases where we
widen two operands, do some arithmetic, then mask off all the bits that were
outside the width of the original operands.
In this case the mask is -2 and the inner operands are unsigned characters
that get widened to signed integers.
Obviously, with a mask of -2, we are _not_ masking off bits outside the
width of the original operands. So even if those operands are marked with
TYPE_OVERFLOW_WRAPS, this optimization must not be applied.
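To make the failure mode concrete, here is a small standalone sketch (my own
illustrative example, not the PR testcase): with unsigned char operands
widened to int, a mask of -2 keeps bits above bit 7, so doing the arithmetic
in 8 bits changes the result.

#include <stdio.h>

int
main (void)
{
  unsigned char c = 200, d = 100;

  /* What the source computes: widen to int, add, then mask.  */
  int wide = ((int) c + (int) d) & -2;                  /* 300 & -2 == 300 */

  /* What the over-eager narrowing would compute: do the arithmetic and
     the masking in unsigned char and only then widen.  */
  int narrow = (unsigned char) ((unsigned char) (c + d)
                                & (unsigned char) -2);  /* 44 & 0xfe == 44 */

  printf ("%d %d\n", wide, narrow);   /* prints "300 44" */
  return 0;
}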
What's so obviously missing here is actually checking the mask.
(mask & (-1UL << TYPE_PRECISION (original operand))) == 0
is a nice, simple way to know whether the mask has any bits outside the
precision of the original operand.
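As a rough standalone illustration of that check (assuming the mask and the
operand precision fit in a 64-bit unsigned long; the committed patch ends up
using GCC's wide-int interface instead, see below):

#include <stdio.h>

/* Return nonzero if MASK has no bits set at or above bit PREC, i.e. the
   mask stays within the precision of the original operand.  Assumes
   PREC is smaller than the width of unsigned long; the review below
   explains why the real patch cannot rely on that.  */
static int
mask_fits_p (unsigned long mask, unsigned prec)
{
  return (mask & (-1UL << prec)) == 0;
}

int
main (void)
{
  printf ("%d\n", mask_fits_p ((unsigned long) -2, 8)); /* 0: -2 reaches past bit 7, reject */
  printf ("%d\n", mask_fits_p (0x7fUL, 8));             /* 1: fits, the transform is safe */
  return 0;
}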
Bootstrapped and regression tested on x86_64-linux-gnu. OK for the trunk?
Thanks,
jeff
diff --git a/gcc/match.pd b/gcc/match.pd
index b399786..46188cb 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -2619,8 +2619,8 @@ along with GCC; see the file COPYING3. If not see
&& types_match (@0, @1)
&& (tree_int_cst_min_precision (@4, TYPE_SIGN (TREE_TYPE (@0)))
<= TYPE_PRECISION (TREE_TYPE (@0)))
- && (TYPE_OVERFLOW_WRAPS (TREE_TYPE (@0))
- || tree_int_cst_sgn (@4) >= 0))
+ && (TREE_INT_CST_LOW (@4)
+ & (HOST_WIDE_INT_M1U << TYPE_PRECISION (TREE_TYPE (@0)))) == 0)
Please use
&& (wi::bit_and (@4, wi::mask (TYPE_PRECISION (TREE_TYPE (@0)),
                                true, TYPE_PRECISION (type))) == 0)
instead. The precision might be larger than 64, in which case the shift is
undefined and TREE_INT_CST_HIGH might contain non-zero bits.
Ok with the above change.
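The undefined shift mentioned here is the usual C rule that a shift count
greater than or equal to the width of the (promoted) type is undefined, and
TREE_INT_CST_LOW only exposes the low 64 bits of the constant anyway, which
is why wide_int is the right tool. A small illustrative sketch of the
pitfall, assuming a 64-bit unsigned long (not GCC code):

#include <stdio.h>

int
main (void)
{
  unsigned prec = 64;               /* e.g. the inner type is 64 bits wide */
  unsigned long mask = (unsigned long) -2;

  /* Undefined: the shift count equals the width of unsigned long, so a
     test built from -1UL << prec cannot be used here.  On x86 it often
     yields -1UL, which would wrongly reject every mask.  */
  /* unsigned long outside = -1UL << prec; */

  /* A defined formulation of the same test for this sketch: shift the
     mask down instead of building the complement of the precision.  */
  int fits = prec >= sizeof (unsigned long) * 8 || (mask >> prec) == 0;

  printf ("%d\n", fits);            /* 1: a 64-bit mask cannot reach past bit 63 */
  return 0;
}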
Made. Final patch (and ChangeLogs) attached for archival purposes.
Re-bootstrapped and regression tested on x86_64-linux-gnu & committed to
the trunk.
jeff
diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index 5aef5b7..29d1ffa 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,10 @@
+2015-10-23 Jeff Law <l...@redhat.com>
+
+ PR tree-optimization/67830
+ * match.pd ((bit_and (plus/minus (convert @0) (convert @1)) mask)):
+ Explicitly verify the mask has no bits outside the type of
+ the innermost operands.
+
2015-10-23 Gregor Richards <gregor.richa...@uwaterloo.ca>
Szabolcs Nagy <szabolcs.n...@arm.com>
diff --git a/gcc/match.pd b/gcc/match.pd
index d182f68..d6ab94e 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -2719,8 +2719,8 @@ along with GCC; see the file COPYING3. If not see
&& types_match (@0, @1)
&& (tree_int_cst_min_precision (@4, TYPE_SIGN (TREE_TYPE (@0)))
<= TYPE_PRECISION (TREE_TYPE (@0)))
- && (TYPE_OVERFLOW_WRAPS (TREE_TYPE (@0))
- || tree_int_cst_sgn (@4) >= 0))
+ && (wi::bit_and (@4, wi::mask (TYPE_PRECISION (TREE_TYPE (@0)),
+ true, TYPE_PRECISION (type))) == 0))
(if (TYPE_OVERFLOW_WRAPS (TREE_TYPE (@0)))
(with { tree ntype = TREE_TYPE (@0); }
(convert (bit_and (op @0 @1) (convert:ntype @4))))
diff --git a/gcc/testsuite/ChangeLog b/gcc/testsuite/ChangeLog
index bedb76a..fed7d10 100644
--- a/gcc/testsuite/ChangeLog
+++ b/gcc/testsuite/ChangeLog
@@ -1,3 +1,8 @@
+2015-10-23 Jeff Law <l...@redhat.com>
+
+ PR tree-optimization/67830
+ * gcc.dg/pr67830.c: New test.
+
2015-10-23 Jan Hubicka <hubi...@ucw.cz>
* gcc.dg/tree-ssa/operand-equal-2.c: New testcase.
diff --git a/gcc/testsuite/gcc.dg/pr67830.c b/gcc/testsuite/gcc.dg/pr67830.c
new file mode 100644
index 0000000..9bfb0c0
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr67830.c
@@ -0,0 +1,22 @@
+/* PR tree-optimization/67830 */
+/* { dg-do run } */
+/* { dg-options "-O2" } */
+
+int a, b, *g, h;
+unsigned char c, d;
+
+int
+main ()
+{
+ int f, e = -2;
+ b = e;
+ g = &b;
+ h = c = a + 1;
+ f = d - h;
+ *g &= f;
+
+ if (b != -2)
+ __builtin_abort ();
+
+ return 0;
+}