On Mon, Sep 26, 2022 at 5:25 PM Andrew Pinski via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> On Sun, Sep 25, 2022 at 9:56 PM Tamar Christina <tamar.christ...@arm.com> 
> wrote:
> >
> > > -----Original Message-----
> > > From: Andrew Pinski <pins...@gmail.com>
> > > Sent: Saturday, September 24, 2022 8:57 PM
> > > To: Tamar Christina <tamar.christ...@arm.com>
> > > Cc: gcc-patches@gcc.gnu.org; nd <n...@arm.com>; rguent...@suse.de
> > > Subject: Re: [PATCH 1/2]middle-end Fold BIT_FIELD_REF and Shifts into
> > > BIT_FIELD_REFs alone
> > >
> > > On Fri, Sep 23, 2022 at 4:43 AM Tamar Christina via Gcc-patches <gcc-
> > > patc...@gcc.gnu.org> wrote:
> > > >
> > > > Hi All,
> > > >
> > > > This adds a match.pd rule that folds a right shift of a bit_field_ref
> > > > of an integer into just a bit_field_ref, by adjusting the offset and
> > > > size of the extract and adding an extension back to the previous size.
> > > >
> > > > Concretely turns:
> > > >
> > > > #include <arm_neon.h>
> > > >
> > > > unsigned int foor (uint32x4_t x)
> > > > {
> > > >     return x[1] >> 16;
> > > > }
> > > >
> > > > which used to generate:
> > > >
> > > >   _1 = BIT_FIELD_REF <x_2(D), 32, 32>;
> > > >   _3 = _1 >> 16;
> > > >
> > > > into
> > > >
> > > >   _4 = BIT_FIELD_REF <x_1(D), 16, 48>;
> > > >   _2 = (unsigned int) _4;
> > > >
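> > > > (In other words, for this example the new extract's size is the old
> > > > size minus the shift amount, 32 - 16 = 16 bits, and its bit position
> > > > is the old position plus the shift amount, 32 + 16 = 48, with the
> > > > narrower result then widened back to the original 32-bit type.)
> > > >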
> > > > I currently limit the rewrite to cases where the resulting extract is
> > > > in a mode the target supports, i.e. it won't rewrite it to extract,
> > > > say, 13 bits, because I worry that for targets without a bit-field
> > > > extract instruction this may be a de-optimization.
> > >
> > > It is only a de-optimization for the following case:
> > > * vector extraction
> > >
> > > All other cases should be handled correctly in the middle-end when
> > > expanding to RTL because they need to be handled for bit-fields anyway.
> > > Plus SIGN_EXTRACT and ZERO_EXTRACT would be used in the integer case
> > > for the RTL.
> > > Getting SIGN_EXTRACT/ZERO_EXTRACT early on in the RTL is better than
> > > waiting until combine really.
> > >
> >
> > Fair enough, I've dropped the constraint.
>
> Well, the constraint should still be done for VECTOR_TYPE, I think.
> Attached is what I had done for left shift for integer types.
> Note the BYTES_BIG_ENDIAN part which you missed for the right shift case.

Note we formerly had BIT_FIELD_REF_UNSIGNED and allowed the precision
of the TREE_TYPE of the BIT_FIELD_REF to not match the extracted size.  That
might have mapped directly to zero/sign_extract.

Now that this is no more we should think of a canonical way to express this
and make sure we can synthesize those early.

Richard.

> Thanks,
> Andrew Pinski
>
> >
> > >
> > > >
> > > > Bootstrapped Regtested on aarch64-none-linux-gnu, x86_64-pc-linux-gnu
> > > > and no issues.
> > > >
> > > > Testcases are added in patch 2/2.
> > > >
> > > > Ok for master?
> > > >
> > > > Thanks,
> > > > Tamar
> > > >
> > > > gcc/ChangeLog:
> > > >
> > > >         * match.pd: Add bitfield and shift folding.
> > > >
> > > > --- inline copy of patch --
> > > > diff --git a/gcc/match.pd b/gcc/match.pd
> > > > index 1d407414bee278c64c00d425d9f025c1c58d853d..b225d36dc758f1581502c8d03761544bfd499c01 100644
> > > > --- a/gcc/match.pd
> > > > +++ b/gcc/match.pd
> > > > @@ -7245,6 +7245,23 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
> > > >        && ANY_INTEGRAL_TYPE_P (type) && ANY_INTEGRAL_TYPE_P (TREE_TYPE(@0)))
> > > >    (IFN_REDUC_PLUS_WIDEN @0)))
> > > >
> > > > +/* Canonicalize BIT_FIELD_REFS and shifts to BIT_FIELD_REFS.  */
> > > > +(for shift (rshift)
> > > > +     op (plus)
> > > > + (simplify
> > > > +  (shift (BIT_FIELD_REF @0 @1 @2) integer_pow2p@3)
> > > > +  (if (INTEGRAL_TYPE_P (type))
> > > > +   (with { /* Can't use wide-int here as the precision differs between
> > > > +             @1 and @3.  */
> > > > +          unsigned HOST_WIDE_INT size = tree_to_uhwi (@1);
> > > > +          unsigned HOST_WIDE_INT shiftc = tree_to_uhwi (@3);
> > > > +          unsigned HOST_WIDE_INT newsize = size - shiftc;
> > > > +          tree nsize = wide_int_to_tree (bitsizetype, newsize);
> > > > +          tree ntype
> > > > +            = build_nonstandard_integer_type (newsize, 1); }
> > >
> > > Maybe use `build_nonstandard_integer_type (newsize, /* unsignedp = */
> > > true);` or better yet `build_nonstandard_integer_type (newsize,
> > > UNSIGNED);`
> >
> > Ah, will do,
> > Tamar.
> >
> > >
> > > I had started to convert some of the unsignedp into enum signop but I 
> > > never
> > > finished or submitted the patch.
> > >
> > > Thanks,
> > > Andrew Pinski
> > >
> > >
> > > > +    (if (ntype)
> > > > +     (convert:type (BIT_FIELD_REF:ntype @0 { nsize; } (op @2 @3))))))))
> > > > +
> > > >  (simplify
> > > >   (BIT_FIELD_REF (BIT_FIELD_REF @0 @1 @2) @3 @4)
> > > >   (BIT_FIELD_REF @0 @3 { const_binop (PLUS_EXPR, bitsizetype, @2, @4); }))
> > > >
> > > > --
