On Wed, Nov 11, 2020 at 11:28 AM Philipp Tomsich
<philipp.toms...@vrull.eu> wrote:
>
> From: Philipp Tomsich <p...@gnu.org>
>
> csmith managed to sneak a shift wider than the bit-width of a register
> past the frontend (found when addressing a bug in our bitmanip machine
> description): no warning is given and an unneeded shift is generated.
> This behaviour was validated for the resulting assembly both for RISC-V
> and AArch64.
>
> This matches (x << C), where C is constant and C > precision(x), and
> rewrites it to (const_int 0).  This has been confirmed to remove the
> redundant shift instruction both for AArch64 and RISC-V.
> ---
>  gcc/match.pd | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/gcc/match.pd b/gcc/match.pd
> index 349eab6..2309175 100644
> --- a/gcc/match.pd
> +++ b/gcc/match.pd
> @@ -764,6 +764,12 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>     (cabss (ops @0))
>     (cabss @0))))
>
> +/* Fold (x << C), where C > precision(type) into 0. */
> +(simplify
> + (lshift @0 INTEGER_CST@1)
> +  (if (wi::ltu_p (TYPE_PRECISION (TREE_TYPE (@0)), wi::to_wide(@1)))

You want element_precision (@0); otherwise this breaks for vector
shifts by scalars.
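
Something along these lines should work (an untested sketch, just
applying element_precision to your pattern as posted):

  /* Untested sketch: fold (x << C) to zero when C exceeds the element
     precision; element_precision also handles vector shifts by scalars.  */
  (simplify
   (lshift @0 INTEGER_CST@1)
   (if (wi::ltu_p (element_precision (@0), wi::to_wide (@1)))
    { build_zero_cst (TREE_TYPE (@0)); }))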

Please move it into the section starting with

/* Simplifications of shift and rotates.  */

You should also be able to write a testcase.  When looking at

int foo(int a)
{
  return a << 33;
}

I see the shift eliminated to zero by early constant propagation, but with
-fno-tree-ccp I see it survive all the way to the assembler.
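
Something like the following might do as a testcase (a rough sketch,
assuming the usual gcc.dg harness; the scan pattern is a guess, and -w
silences the front end's shift-count warning):

  /* { dg-do compile } */
  /* { dg-options "-O2 -fno-tree-ccp -fdump-tree-optimized -w" } */

  int foo(int a)
  {
    return a << 33;
  }

  /* Guessed scan pattern; adjust to what the optimized dump shows.  */
  /* { dg-final { scan-tree-dump "return 0" "optimized" } } */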

Thanks,
Richard.

> +   { build_zero_cst (TREE_TYPE (@0)); } ))
> +
>  /* Fold (a * (1 << b)) into (a << b)  */
>  (simplify
>   (mult:c @0 (convert? (lshift integer_onep@1 @2)))
> --
> 1.8.3.1
>
