Kenneth Zadeck <zad...@naturalbridge.com> writes:
> On 12/03/2013 11:52 AM, Richard Sandiford wrote:
>> Kenneth Zadeck <zad...@naturalbridge.com> writes:
>>> Index: tree-vrp.c
>>> ===================================================================
>>> --- tree-vrp.c      (revision 205597)
>>> +++ tree-vrp.c      (working copy)
>>> @@ -2611,22 +2611,28 @@ extract_range_from_binary_expr_1 (value_
>>>   
>>>         signop sign = TYPE_SIGN (expr_type);
>>>         unsigned int prec = TYPE_PRECISION (expr_type);
>>> -      unsigned int prec2 = (prec * 2) + (sign == UNSIGNED ? 2 : 0);
>>>   
>>>         if (range_int_cst_p (&vr0)
>>>       && range_int_cst_p (&vr1)
>>>       && TYPE_OVERFLOW_WRAPS (expr_type))
>>>     {
>>> -     wide_int sizem1 = wi::mask (prec, false, prec2);
>>> -     wide_int size = sizem1 + 1;
>>> +     /* vrp_int is twice as wide as anything that the target
>>> +        supports so it can support a full width multiply.  No
>>> +        need to add any more padding for an extra sign bit
>>> +        because that comes with the way that WIDE_INT_MAX_ELTS is
>>> +        defined.  */
>>> +     typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION * 2)
>>> +       vrp_int;
>>> +     vrp_int sizem1 = wi::mask <vrp_int> (prec, false);
>>> +     vrp_int size = sizem1 + 1;
>>>   
>>>       /* Extend the values using the sign of the result to PREC2.
>>>          From here on out, everything is just signed math no matter
>>>          what the input types were.  */
>>> -     wide_int min0 = wide_int::from (vr0.min, prec2, sign);
>>> -     wide_int max0 = wide_int::from (vr0.max, prec2, sign);
>>> -     wide_int min1 = wide_int::from (vr1.min, prec2, sign);
>>> -     wide_int max1 = wide_int::from (vr1.max, prec2, sign);
>>> +     vrp_int min0 = wi::to_vrp (vr0.min);
>>> +     vrp_int max0 = wi::to_vrp (vr0.max);
>>> +     vrp_int min1 = wi::to_vrp (vr1.min);
>>> +     vrp_int max1 = wi::to_vrp (vr1.max);
>> I think we should avoid putting to_vrp in tree.h if vrp_int is only
>> local to this block.  Instead you could have:
>>
>>        typedef generic_wide_int
>>               <wi::extended_tree <WIDE_INT_MAX_PRECISION * 2> > vrp_int_cst;
>>            ...
>>            vrp_int_cst min0 = vr0.min;
>>            vrp_int_cst max0 = vr0.max;
>>            vrp_int_cst min1 = vr1.min;
>>            vrp_int_cst max1 = vr1.max;
>>
> I did this in a different way because I had trouble doing it as you
> suggested.  The short answer is that all of the vrp_int code is now
> local to tree-vrp.c, which I think was your primary goal.

Ah, so we later assign to these variables:

          /* Canonicalize the intervals.  */
          if (sign == UNSIGNED)
            {
              if (wi::ltu_p (size, min0 + max0))
                {
                  min0 -= size;
                  max0 -= size;
                }

              if (wi::ltu_p (size, min1 + max1))
                {
                  min1 -= size;
                  max1 -= size;
                }
            }

OK, in that case I suppose a temporary is needed.  But I'd prefer
not to put local stuff in the wi:: namespace.  You could just have:

          typedef generic_wide_int
            <wi::extended_tree <WIDE_INT_MAX_PRECISION * 2> > vrp_int_cst;

          vrp_int min0 = vrp_int_cst (vr0.min);
          vrp_int max0 = vrp_int_cst (vr0.max);
          vrp_int min1 = vrp_int_cst (vr1.min);
          vrp_int max1 = vrp_int_cst (vr1.max);

which removes the need for:

+/* vrp_int is twice as wide as anything that the target supports so it
+   can support a full width multiply.  No need to add any more padding
+   for an extra sign bit because that comes with the way that
+   WIDE_INT_MAX_ELTS is defined.  */ 
+typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION * 2) vrp_int;
+namespace wi
+{
+  generic_wide_int <extended_tree <WIDE_INT_MAX_PRECISION * 2> > to_vrp (const_tree);
+}
+
+inline generic_wide_int <wi::extended_tree <WIDE_INT_MAX_PRECISION * 2> >
+wi::to_vrp (const_tree t)
+{
+  return t;
+}
+

>>>   #define WIDE_INT_MAX_ELTS \
>>> -  ((4 * MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT - 1) \
>>> -   / HOST_BITS_PER_WIDE_INT)
>>> +  (((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT - 1)        \
>>> +    / HOST_BITS_PER_WIDE_INT) + 1)
>> I think this should be:
>>
>>    (MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT + 1)
>>
>> We only need an extra HWI if MAX_BITSIZE_MODE_ANY_INT is an exact multiple
>> of HOST_BITS_PER_WIDE_INT.
> We will do this later, when some other issues that Eric B raised are settled.

I think you're talking about defining MAX_BITSIZE_MODE_ANY_INT as
MAX_SIZE_MODE_ANY_INT * BITS_PER_UNIT, but that's orthogonal to the
change above.  IMO it doesn't make sense to both round up the division
and also add 1 to the result.  So I think we should make this change
regardless of whatever follows.
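
To make that concrete (illustrative values only: 64 stands in for
HOST_BITS_PER_WIDE_INT and 128/129 are made-up MAX_BITSIZE_MODE_ANY_INT
values; the _Static_asserts below are just the arithmetic written down
as checks, not proposed code):

  #define HWI_BITS 64

  /* The patch: round up the division, then add 1.  */
  #define ELTS_PATCH(bits)     (((bits) + HWI_BITS - 1) / HWI_BITS + 1)
  /* The suggestion: truncating division, then add 1.  */
  #define ELTS_SUGGESTED(bits) ((bits) / HWI_BITS + 1)

  /* Exact multiple of HWI_BITS: the extra HWI really is needed and the
     division alone does not provide it, so both formulas give 3.  */
  _Static_assert (ELTS_PATCH (128) == 3 && ELTS_SUGGESTED (128) == 3,
                  "exact multiple");

  /* Not an exact multiple: the round-up already leaves spare bits in
     the top HWI, so adding 1 on top of it just wastes an element
     (4 instead of 3).  */
  _Static_assert (ELTS_PATCH (129) == 4 && ELTS_SUGGESTED (129) == 3,
                  "non-multiple");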

Looks good to me otherwise, thanks.

Richard
