Jakub Jelinek <ja...@redhat.com> writes:
> On Thu, Oct 12, 2023 at 11:54:14AM +0100, Richard Sandiford wrote:
>> Jakub Jelinek <ja...@redhat.com> writes:
>> > @@ -2036,11 +2075,20 @@ wi::lrshift_large (HOST_WIDE_INT *val, c
>> >               unsigned int xlen, unsigned int xprecision,
>> >               unsigned int precision, unsigned int shift)
>> >  {
>> > -  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
>> > +  /* Work out how many blocks are needed to store the significant bits
>> > +     (excluding the upper zeros or signs).  */
>> > +  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
>> > +  unsigned int len = blocks_needed;
>> > +  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)
>> > +      && len > xlen
>> > +      && xval[xlen - 1] >= 0)
>> > +    len = xlen;
>> 
>> I think here too it would be worth dropping the:
>> 
>>   UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)
>> 
>> part of the condition, since presumably the change should be safe
>> regardless of that.
>
> If so, there is also one spot in lshift_large as well.  So incrementally:
>
> --- gcc/wide-int.cc   2023-10-11 14:41:23.719132402 +0200
> +++ gcc/wide-int.cc   2023-10-11 14:41:23.719132402 +0200
> @@ -2013,8 +2013,7 @@
>  
>    /* The whole-block shift fills with zeros.  */
>    unsigned int len = BLOCKS_NEEDED (precision);
> -  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
> -    len = xlen + skip + 1;
> +  len = MIN (xlen + skip + 1, len);
>    for (unsigned int i = 0; i < skip; ++i)
>      val[i] = 0;
>  
> @@ -2079,9 +2078,7 @@
>       (excluding the upper zeros or signs).  */
>    unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
>    unsigned int len = blocks_needed;
> -  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)
> -      && len > xlen
> -      && xval[xlen - 1] >= 0)
> +  if (len > xlen && xval[xlen - 1] >= 0)
>      len = xlen;
>  
>    rshift_large_common (val, xval, xlen, shift, len);
> @@ -2114,9 +2111,7 @@
>    /* Work out how many blocks are needed to store the significant bits
>       (excluding the upper zeros or signs).  */
>    unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
> -  unsigned int len = blocks_needed;
> -  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS) && len > xlen)
> -    len = xlen;
> +  unsigned int len = MIN (xlen, blocks_needed);
>  
>    rshift_large_common (val, xval, xlen, shift, len);
>  
> which I'll test soon.
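
For readers following along, the reason dropping the UNLIKELY test is safe
regardless of the inline limit: in the wide-int block representation, blocks
above the stored length are implied by sign-extending the top stored block,
so for a value whose top block is non-negative any extra blocks would all be
zeros.  A minimal standalone sketch of that invariant, using hypothetical
helper names rather than GCC's actual wide-int API:

/* Standalone sketch, not GCC's wide-int API: blocks above the stored
   length are implied by sign-extending the top stored block.  */
#include <cstdint>
#include <cassert>

/* Hypothetical accessor: block I of a value stored in LEN blocks.  */
static int64_t
get_block (const int64_t *val, unsigned int len, unsigned int i)
{
  if (i < len)
    return val[i];
  return val[len - 1] < 0 ? -1 : 0;  /* implicit sign-extension blocks */
}

int
main ()
{
  /* Top stored block is non-negative, so all implicit blocks are zero.  */
  const int64_t xval[2] = { 0x123456789abcdef0LL, 0x7fff };
  const unsigned int xlen = 2;

  /* A result length larger than XLEN would only append zero blocks, so
     clamping LEN to XLEN loses nothing -- whether or not LEN exceeds
     WIDE_INT_MAX_INL_ELTS.  */
  for (unsigned int i = xlen; i < 8; ++i)
    assert (get_block (xval, xlen, i) == 0);
  return 0;
}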

LGTM.

>> OK for the wide-int parts with those changes.
>
> Thanks.  What do you think about that
> --- gcc/wide-int.h.jj 2023-10-11 12:05:47.718059477 +0200
> +++ gcc/wide-int.h    2023-10-11 13:51:56.081552500 +0200
> @@ -1635,6 +1635,8 @@ widest_int_storage <N>::write_val (unsig
>        u.valp = XNEWVEC (HOST_WIDE_INT, l);
>        return u.valp;
>      }
> +  else if (CHECKING_P && l < WIDE_INT_MAX_INL_ELTS)
> +    u.val[l] = HOST_WIDE_INT_UC (0xbaaaaaaddeadbeef);
>    return u.val;
>  }
>  
> @@ -1650,6 +1652,9 @@ widest_int_storage <N>::set_len (unsigne
>        memcpy (u.val, valp, l * sizeof (u.val[0]));
>        XDELETEVEC (valp);
>      }
> +  else if (len && len < WIDE_INT_MAX_INL_ELTS)
> +    gcc_checking_assert ((unsigned HOST_WIDE_INT) u.val[len]
> +                      == HOST_WIDE_INT_UC (0xbaaaaaaddeadbeef));
>    len = l;
>    /* There are no excess bits in val[len - 1].  */
>    STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
>
> part?  Shall that go into trunk as well, or is it too much of a slowdown
> for checking builds?

I don't have a good intuition about how big the slowdown will be,
but FWIW I agree with Richi that it'd be better to include the change.
We can always take it out again if it proves to be unexpectedly expensive.
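
For context, that hunk is the classic poison-value (canary) pattern:
write_val plants a sentinel one block past the requested length, and set_len
asserts the sentinel survived, so any writer that overruns the length it
asked for trips the assert.  A minimal standalone sketch of the idiom, with
simplified names rather than GCC's actual widest_int_storage:

/* Standalone sketch of the poison-value check above (simplified names,
   not GCC's widest_int_storage).  */
#include <cstdint>
#include <cassert>

const unsigned int MAX_ELTS = 9;               /* stand-in for WIDE_INT_MAX_INL_ELTS */
const uint64_t POISON = 0xbaaaaaaddeadbeefULL; /* same sentinel as the patch */

struct int_storage
{
  int64_t val[MAX_ELTS];
  unsigned int len;

  /* Hand out a buffer for L blocks; remember L and poison the slot
     just past the requested blocks.  */
  int64_t *
  write_val (unsigned int l)
  {
    len = l;
    if (l < MAX_ELTS)
      val[l] = (int64_t) POISON;
    return val;
  }

  /* Record the final length, after checking that nothing wrote past
     the length requested from write_val.  */
  void
  set_len (unsigned int l)
  {
    if (len && len < MAX_ELTS)
      assert ((uint64_t) val[len] == POISON);
    len = l;
  }
};

int
main ()
{
  int_storage s = {};
  int64_t *p = s.write_val (2);
  p[0] = 42;
  p[1] = -1;
  s.set_len (2);  /* sentinel at val[2] intact, so the check passes */
  return 0;
}

The runtime cost is one extra store per write_val and one compare per
set_len, which is the checking-build slowdown being weighed above.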

Thanks,
Richard
