Jakub Jelinek via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> Hi!
>
> The testcase in the patch doesn't assemble, because the instruction requires
> that the penultimate operand (lsb) range is [0, 32] (or [0, 64]) and the last
> operand's range is [1, 32 - lsb] (or [1, 64 - lsb]).
> The INTVAL (shft_amnt) < GET_MODE_BITSIZE (mode) check will accept an lsb
> operand in the range [MIN, 32] (or [MIN, 64]); we then invoke UB in the
> compiler and sometimes the bogus instruction makes it through to the assembler.
> The first patch just changes that check to
> UINTVAL (shft_amnt) < GET_MODE_BITSIZE (mode)
> so that it will return false for negative shift amounts, thus there won't be
> UB in the compiler later.
> The second patch changes all the INTVAL uses in that function to UINTVAL,
> which isn't strictly necessary, but can be done (e.g. after the
> UINTVAL (shft_amnt) < GET_MODE_BITSIZE (mode) check we know it is not
> negative, and thus INTVAL (shft_amnt) and UINTVAL (shft_amnt) behave the
> same).  But I had to add an INTVAL (mask) > 0 check in that case, otherwise
> we risk (hypothetically) emitting an instruction that doesn't assemble.
> The problem is with masks that have the MSB set; while the instruction
> can handle those, e.g.
> ubfiz w1, w0, 13, 19
> will do
> (w0 << 13) & 0xffffe000
> in RTL we represent SImode constants with the MSB set as negative
> HOST_WIDE_INTs, so it will actually be HOST_WIDE_INT_C (0xffffffffffffe000),
> and the instruction uses %P3 to print the last operand, which calls
> asm_fprintf (f, "%u", popcount_hwi (INTVAL (x)))
> to print it.  But that will print 51 rather than 19, because it also
> counts all the copies of the sign bit.
> Not supporting those masks with the MSB set isn't a big loss though; they
> really shouldn't appear normally, as both GIMPLE and RTL optimizations
> should optimize them away (one isn't masking any bits off with such a mask,
> so just w0 << 13 will do too).
>
> Both patches were successfully bootstrapped/regtested on aarch64-linux.
> Ok for trunk and which one?

IMO the second is better.  Things like hwint.h only assume signed >>
is arithmetic shift if __GNUC__, so I guess this code should too.

> 2021-01-23  Jakub Jelinek  <ja...@redhat.com>
>
>       PR target/98681
>       * config/aarch64/aarch64.c (aarch64_mask_and_shift_for_ubfiz_p):
>       Use UINTVAL (shft_amnt) and UINTVAL (mask) instead of INTVAL (shft_amnt)
>       and INTVAL (mask).  Add && INTVAL (mask) > 0 condition.
>
>       * gcc.c-torture/execute/pr98681.c: New test.

OK, thanks.

Richard

>
> --- gcc/config/aarch64/aarch64.c.jj   2021-01-13 11:36:27.069888393 +0100
> +++ gcc/config/aarch64/aarch64.c      2021-01-22 18:53:18.611518461 +0100
> @@ -12060,10 +12060,11 @@ aarch64_mask_and_shift_for_ubfiz_p (scal
>                                   rtx shft_amnt)
>  {
>    return CONST_INT_P (mask) && CONST_INT_P (shft_amnt)
> -      && INTVAL (shft_amnt) < GET_MODE_BITSIZE (mode)
> -      && exact_log2 ((INTVAL (mask) >> INTVAL (shft_amnt)) + 1) >= 0
> -      && (INTVAL (mask)
> -          & ((HOST_WIDE_INT_1U << INTVAL (shft_amnt)) - 1)) == 0;
> +      && INTVAL (mask) > 0
> +      && UINTVAL (shft_amnt) < GET_MODE_BITSIZE (mode)
> +      && exact_log2 ((UINTVAL (mask) >> UINTVAL (shft_amnt)) + 1) >= 0
> +      && (UINTVAL (mask)
> +          & ((HOST_WIDE_INT_1U << UINTVAL (shft_amnt)) - 1)) == 0;
>  }
>  
>  /* Return true if the masks and a shift amount from an RTX of the form
> --- gcc/testsuite/gcc.c-torture/execute/pr98681.c.jj  2021-01-22 16:45:05.102070501 +0100
> +++ gcc/testsuite/gcc.c-torture/execute/pr98681.c     2021-01-22 16:44:34.165416961 +0100
> @@ -0,0 +1,18 @@
> +/* PR target/98681 */
> +
> +__attribute__((noipa)) int
> +foo (int x)
> +{
> +  if (x > 32)
> +    return (x << -64) & 255;
> +  else
> +    return x;
> +}
> +
> +int
> +main ()
> +{
> +  if (foo (32) != 32 || foo (-150) != -150)
> +    __builtin_abort ();
> +  return 0;
> +}
