This patch introduces a new pattern for the match-and-simplify
machinery.

I have verified this transformation on a toy testcase (exhaustively
trying all x and y in the range [-1000,1000]) and it does the right
thing for every pair.

The asm diff for fn1 is
-       andl    %esi, %eax
-       orl     %edi, %esi
i.e. two instructions dropped, so this is clearly a win.

Bootstrapped/regtested on x86_64-linux, ok for trunk?

2015-06-11  Marek Polacek  <pola...@redhat.com>

        * match.pd ((x & y) ^ (x | y) -> x ^ y): New pattern.

        * gcc.dg/fold-xor-3.c: New test.

diff --git gcc/match.pd gcc/match.pd
index 48358a8..7a7b201 100644
--- gcc/match.pd
+++ gcc/match.pd
@@ -320,6 +320,12 @@ along with GCC; see the file COPYING3.  If not see
   (bitop:c (rbitop:c @0 @1) (bit_not@2 @0))
   (bitop @1 @2)))
 
+/* (x & y) ^ (x | y) -> x ^ y */
+(simplify
+ (bit_xor:c (bit_and@2 @0 @1) (bit_ior@3 @0 @1))
+  (if (single_use (@2) && single_use (@3))
+   (bit_xor @0 @1)))
+
 (simplify
  (abs (negate @0))
  (abs @0))
diff --git gcc/testsuite/gcc.dg/fold-xor-3.c gcc/testsuite/gcc.dg/fold-xor-3.c
index e69de29..a66f89d 100644
--- gcc/testsuite/gcc.dg/fold-xor-3.c
+++ gcc/testsuite/gcc.dg/fold-xor-3.c
@@ -0,0 +1,28 @@
+/* { dg-do compile } */
+/* { dg-options "-fdump-tree-original" } */
+
+int
+fn1 (signed int x, signed int y)
+{
+  return (x & y) ^ (x | y);
+}
+
+unsigned int
+fn2 (unsigned int x, unsigned int y)
+{
+  return (x & y) ^ (x | y);
+}
+
+int
+fn3 (signed int x, signed int y)
+{
+  return (x | y) ^ (x & y);
+}
+
+unsigned int
+fn4 (unsigned int x, unsigned int y)
+{
+  return (x | y) ^ (x & y);
+}
+
+/* { dg-final { scan-tree-dump-times "return x \\^ y;" 4 "original" } } */

        Marek