Hi All,

This adds a match.pd rule that folds a right shift of an integer
BIT_FIELD_REF into just a BIT_FIELD_REF: the offset and size of the
extract are adjusted to skip the shifted-out bits, and the narrower
result is then extended back to the original size.

Concretely it turns:

#include <arm_neon.h>

unsigned int foor (uint32x4_t x)
{
    return x[1] >> 16;
}

which currently generates:

  _1 = BIT_FIELD_REF <x_2(D), 32, 32>;
  _3 = _1 >> 16;

into

  _4 = BIT_FIELD_REF <x_1(D), 16, 48>;
  _2 = (unsigned int) _4;
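
For reference, this relies on the scalar identity that dropping the
low c bits of a size-bit extract at bit position pos is the same as a
(size - c)-bit extract at position pos + c, zero-extended back
(little-endian bit numbering, as in the gimple above).  A minimal
self-contained check of that identity (a sketch only; the extract
helper is made up for illustration):

  #include <assert.h>
  #include <stdint.h>

  /* Extract SIZE bits of X starting at bit POS (little-endian bit
     numbering, matching the BIT_FIELD_REFs above).  */
  static uint64_t
  extract (uint64_t x, unsigned pos, unsigned size)
  {
    return (x >> pos) & ((UINT64_C (1) << size) - 1);
  }

  int
  main (void)
  {
    uint64_t x = 0x1234567890abcdefULL;
    /* _1 = BIT_FIELD_REF <x, 32, 32>;  _3 = _1 >> 16;  */
    uint64_t before = extract (x, 32, 32) >> 16;
    /* _4 = BIT_FIELD_REF <x, 16, 48>;  _2 = (unsigned int) _4;  */
    uint64_t after = extract (x, 48, 16);
    assert (before == after);
    return 0;
  }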

I currently limit the rewrite to cases where the resulting extract is
in a mode the target supports, i.e. it won't rewrite the extract to,
say, 13 bits, because for targets that lack a bit-field extract
instruction this may be a de-optimization.
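
As an aside, if that guard were ever to be made explicit, a minimal
sketch (an assumption on my part, not what the pattern below does)
could query the target through the existing int_mode_for_size helper,
with the result tested in the with-block alongside ntype:

  /* Sketch only: does the target have a scalar integer mode of
     exactly NEWSIZE bits (i.e. reject e.g. a 13-bit extract)?  */
  scalar_int_mode mode;
  bool mode_ok = int_mode_for_size (newsize, 1).exists (&mode);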

Bootstrapped and regtested on aarch64-none-linux-gnu and
x86_64-pc-linux-gnu with no issues.

Testcases are added in patch 2/2.

Ok for master?

Thanks,
Tamar

gcc/ChangeLog:

        * match.pd: Add bitfield and shift folding.

--- inline copy of patch -- 
diff --git a/gcc/match.pd b/gcc/match.pd
index 1d407414bee278c64c00d425d9f025c1c58d853d..b225d36dc758f1581502c8d03761544bfd499c01 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -7245,6 +7245,23 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
       && ANY_INTEGRAL_TYPE_P (type) && ANY_INTEGRAL_TYPE_P (TREE_TYPE(@0)))
   (IFN_REDUC_PLUS_WIDEN @0)))
 
+/* Canonicalize BIT_FIELD_REFS and shifts to BIT_FIELD_REFS.  */
+(for shift (rshift)
+     op (plus)
+ (simplify
+  (shift (BIT_FIELD_REF @0 @1 @2) integer_pow2p@3)
+  (if (INTEGRAL_TYPE_P (type))
+   (with { /* Can't use wide-int here as the precision differs between
+             @1 and @3.  */
+          unsigned HOST_WIDE_INT size = tree_to_uhwi (@1);
+          unsigned HOST_WIDE_INT shiftc = tree_to_uhwi (@3);
+          unsigned HOST_WIDE_INT newsize = size - shiftc;
+          tree nsize = wide_int_to_tree (bitsizetype, newsize);
+          tree ntype
+            = build_nonstandard_integer_type (newsize, 1); }
+    (if (ntype)
+     (convert:type (BIT_FIELD_REF:ntype @0 { nsize; } (op @2 @3))))))))
+
 (simplify
  (BIT_FIELD_REF (BIT_FIELD_REF @0 @1 @2) @3 @4)
  (BIT_FIELD_REF @0 @3 { const_binop (PLUS_EXPR, bitsizetype, @2, @4); }))
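
Walking the motivating example through the new pattern: @1 = 32,
@2 = 32 and @3 = 16, so newsize = 32 - 16 = 16, the new position is
(plus @2 @3) = 48, and ntype is a 16-bit unsigned type, which yields
exactly the BIT_FIELD_REF <x, 16, 48> plus widening conversion shown
above.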



