On Tue, Jul 25, 2023 at 9:26 PM Drew Ross <drr...@redhat.com> wrote:
>
> > With that fixed I think for non-vector integrals the above is the most
> > suitable canonical form of a sign-extension.  Note it should also work
> > for any other constant shift amount - just use the appropriate
> > intermediate precision for the truncating type.
> > We _might_ want to consider only using the converts when the intermediate
> > type has mode precision (and as a special case allow one bit as in your
> > above case) so it can expand to (sign_extend:<outer> (subreg:<inner> reg)).
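>
> For example (purely illustrative, assuming 32-bit int), a shift amount of 16
> gives a 16-bit intermediate type, so (x << 16) >> 16 becomes:
>
>   /* Same value as (x << 16) >> 16 under GIMPLE's well-defined shift
>      semantics: truncate to 16 bits, then sign-extend back to 32 bits.  */
>   int
>   sext_low16 (int x)
>   {
>     return (int) (short) x;
>   }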
>
> Here is a pattern that only matches truncations that result in mode
> precision (or a precision of 1):
>
> (simplify
>  (rshift (nop_convert? (lshift @0 INTEGER_CST@1)) @@1)
>  (if (INTEGRAL_TYPE_P (type)
>       && !TYPE_UNSIGNED (type)
>       && wi::gt_p (element_precision (type), wi::to_wide (@1),
>                    TYPE_SIGN (TREE_TYPE (@1))))
>   (with {
>     int width = element_precision (type) - tree_to_uhwi (@1);
>     tree stype = build_nonstandard_integer_type (width, 0);
>    }
>    (if (TYPE_PRECISION (stype) == 1 || type_has_mode_precision_p (stype))
>     (convert (convert:stype @0))))))
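>
> For example (purely illustrative, 32-bit int): a shift amount of 24 gives an
> 8-bit stype and is canonicalized, 31 hits the special-cased 1-bit stype,
> while 13 would need a 19-bit stype without mode precision and is left alone:
>
>   /* Illustrative test functions (GIMPLE shift semantics assumed),
>      not from the patch.  */
>   int f24 (int x) { return (x << 24) >> 24; }  /* 8-bit stype, converted    */
>   int f31 (int x) { return (x << 31) >> 31; }  /* 1-bit stype, converted    */
>   int f13 (int x) { return (x << 13) >> 13; }  /* 19-bit stype, left alone  */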
>
> Look ok?

I suppose so.  Can you see to amend the existing

/* Optimize (x << c) >> c into x & ((unsigned)-1 >> c) for unsigned
   types.  */
(simplify
 (rshift (lshift @0 INTEGER_CST@1) @1)
 (if (TYPE_UNSIGNED (type)
      && (wi::ltu_p (wi::to_wide (@1), element_precision (type))))
  (bit_and @0 (rshift { build_minus_one_cst (type); } @1))))

pattern?  You will get a duplicate pattern diagnostic otherwise.  It also
looks like this one has the (nop_convert? ..) missing.  Btw, I wonder whether
we can handle some cases of widening/truncating converts between the shifts?
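
For reference (purely illustrative, 32-bit unsigned int), the existing pattern
rewrites the shift pair into a mask:

  /* (x << 8) >> 8 on unsigned int becomes x & (~0u >> 8),
     i.e. x & 0x00ffffff.  */
  unsigned int
  clear_high8 (unsigned int x)
  {
    return (x << 8) >> 8;
  }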

Richard.

> > You might also want to verify what RTL expansion
> > produces before/after - it at least shouldn't be worse.
>
> The RTL is slightly better for the mode precision cases and slightly worse 
> for the precision 1 case.
>
> > That said - do you have any testcase where the canonicalization is an
> > enabler for further transforms or was this requested stand-alone?
>
> No, I don't have any specific test cases.  This patch is just in response to
> PR101955.
>
> On Tue, Jul 25, 2023 at 2:55 AM Richard Biener <richard.guent...@gmail.com> wrote:
>>
>> On Mon, Jul 24, 2023 at 9:42 PM Jakub Jelinek <ja...@redhat.com> wrote:
>> >
>> > On Mon, Jul 24, 2023 at 03:29:54PM -0400, Drew Ross via Gcc-patches wrote:
>> > > So would something like
>> > >
>> > > (simplify
>> > >  (rshift (nop_convert? (lshift @0 INTEGER_CST@1)) @@1)
>> > >  (with { tree stype = build_nonstandard_integer_type (1, 0); }
>> > >  (if (INTEGRAL_TYPE_P (type)
>> > >       && !TYPE_UNSIGNED (type)
>> > >       && wi::eq_p (wi::to_wide (@1), element_precision (type) - 1))
>> > >   (convert (convert:stype @0)))))
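>> > >
>> > > For a 32-bit int this turns (x << 31) >> 31 into a truncation to a 1-bit
>> > > signed type followed by a sign-extension, i.e. (purely illustrative) the
>> > > same value as:
>> > >
>> > >   /* 0 when the low bit of x is clear, -1 when it is set.  */
>> > >   int low_bit_sext (int x) { return -(x & 1); }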
>> > >
>> > > work?
>> >
>> > Certainly swap the if and with; the (with should then be indented by 1
>> > column to the right of (if, and the (convert one further.  (The reason for
>> > the swapping is not to call build_nonstandard_integer_type when it will
>> > not be needed, which will probably be far more often than an actual match.)
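>> >
>> > I.e. roughly (an untested sketch, not the final form):
>> >
>> > (simplify
>> >  (rshift (nop_convert? (lshift @0 INTEGER_CST@1)) @@1)
>> >  (if (INTEGRAL_TYPE_P (type)
>> >       && !TYPE_UNSIGNED (type)
>> >       && wi::eq_p (wi::to_wide (@1), element_precision (type) - 1))
>> >   (with { tree stype = build_nonstandard_integer_type (1, 0); }
>> >    (convert (convert:stype @0)))))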
>>
>> With that fixed I think for non-vector integrals the above is the most
>> suitable canonical form of a sign-extension.  Note it should also work for
>> any other constant shift amount - just use the appropriate intermediate
>> precision for the truncating type.  You might also want to verify what RTL
>> expansion produces before/after - it at least shouldn't be worse.  We
>> _might_ want to consider only using the converts when the intermediate type
>> has mode precision (and as a special case allow one bit as in your above
>> case) so it can expand to (sign_extend:<outer> (subreg:<inner> reg)).
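>>
>> For instance (purely illustrative, little-endian SImode/QImode) that would
>> allow a single
>>
>>   (set (reg:SI 101)
>>        (sign_extend:SI (subreg:QI (reg:SI 100) 0)))
>>
>> instead of an ashift/ashiftrt pair.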
>>
>> > As discussed privately, the above isn't what we want for vectors, and the
>> > 2 shifts are probably best on most arches because even when using -(x & 1)
>> > the { 1, 1, 1, ... } vector would often need to be loaded from memory.
>>
>> I think for vectors a vpcmpgt {0,0,0,..}, %xmm is the cheapest way of
>> producing the result.  Note that to reflect this on GIMPLE you'd need
>>
>>   _2 = _1 < { 0,0...};
>>   res = _2 ? { -1, -1, ...} : { 0, 0,...};
>>
>> because whether the ISA has a way to produce all-ones masks isn't known.
>>
>> For scalars using -(T)(_1 < 0) would also be possible.
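>>
>> E.g. (purely illustrative) for a 32-bit int:
>>
>>   /* 0 for non-negative x, -1 otherwise - same value as x >> 31.  */
>>   int sign_mask (int x) { return -(x < 0); }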
>>
>> That said - do you have any testcase where the canonicalization is an enabler
>> for further transforms or was this requested stand-alone?
>>
>> Thanks,
>> Richard.
>>
>> >         Jakub
>> >
>>
