On Tue, Jun 3, 2014 at 10:11 AM, Eric Botcazou <ebotca...@adacore.com> wrote:
>> Looks mostly ok.  Any reason why you are not re-creating
>> MINUS_EXPR in build_symbolic_expr?  That is, build
>> inv - t (for non-pointers, of course)?
>
> It's more uniform, and compare_values expects an INTEGER_CST on the RHS,
> although the patch was missing a small tweak to the function:
>
> @@ -1205,19 +1292,23 @@ compare_values_warnv (tree val1, tree va
>    STRIP_USELESS_TYPE_CONVERSION (val2);
>
>    if ((TREE_CODE (val1) == SSA_NAME
> +       || (TREE_CODE (val1) == NEGATE_EXPR
> +          && TREE_CODE (TREE_OPERAND (val1, 0)) == SSA_NAME)
>         || TREE_CODE (val1) == PLUS_EXPR
>         || TREE_CODE (val1) == MINUS_EXPR)
>        && (TREE_CODE (val2) == SSA_NAME
> +         || (TREE_CODE (val2) == NEGATE_EXPR
> +             && TREE_CODE (TREE_OPERAND (val2, 0)) == SSA_NAME)
>           || TREE_CODE (val2) == PLUS_EXPR
>           || TREE_CODE (val2) == MINUS_EXPR))
>      {
>        tree n1, c1, n2, c2;
>        enum tree_code code1, code2;
>
> -      /* If VAL1 and VAL2 are of the form 'NAME [+-] CST' or 'NAME',
> +      /* If VAL1 and VAL2 are of the form '[-]NAME [+-] CST' or 'NAME',
>          return -1 or +1 accordingly.  If VAL1 and VAL2 don't use the
>          same name, return -2.  */
> -      if (TREE_CODE (val1) == SSA_NAME)
> +      if (TREE_CODE (val1) == SSA_NAME || TREE_CODE (val1) == NEGATE_EXPR)
>         {
>           code1 = SSA_NAME;
>           n1 = val1;
> @@ -1239,7 +1330,7 @@ compare_values_warnv (tree val1, tree va
>             }
>         }
>
> -      if (TREE_CODE (val2) == SSA_NAME)
> +      if (TREE_CODE (val2) == SSA_NAME || TREE_CODE (val2) == NEGATE_EXPR)
>         {
>           code2 = SSA_NAME;
>           n2 = val2;
> @@ -1262,6 +1353,11 @@ compare_values_warnv (tree val1, tree va
>         }
>
>        /* Both values must use the same name.  */
> +      if (TREE_CODE (n1) == NEGATE_EXPR && TREE_CODE (n2) == NEGATE_EXPR)
> +       {
> +         n1 = TREE_OPERAND (n1, 0);
> +         n2 = TREE_OPERAND (n2, 0);
> +       }
>        if (n1 != n2)
>         return -2;

Ah, ok - makes sense.
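
To convince myself the tweak does what I think it does, here's a toy
standalone model (plain C, invented names and types; not the actual
tree-vrp.c code) of the comparison: when both values have the
'-NAME [+ CST]' form, the negation is stripped from both sides so the
underlying name can still be matched, and the constants then decide the
ordering as before:

/* Toy model of the compare_values tweak quoted above; types and
   helpers are invented for illustration only.  */
#include <stdio.h>
#include <string.h>

struct symval
{
  const char *name;   /* the SSA name, e.g. "x_1" */
  int negated;        /* 1 for -NAME [+ CST] */
  long cst;           /* the constant part, 0 if absent */
};

/* Return -2 if V1 and V2 cannot be compared (different names, or only
   one side negated), otherwise -1/0/+1 according to the constants.  */
static int
compare_symvals (struct symval v1, struct symval v2)
{
  /* Both values must use the same name, so, as in the patch, strip a
     negation that appears on both sides before comparing the names.  */
  if (v1.negated && v2.negated)
    v1.negated = v2.negated = 0;
  if (v1.negated != v2.negated || strcmp (v1.name, v2.name) != 0)
    return -2;

  /* Same (possibly negated) name: the constants decide, since
     -x + c1 < -x + c2 exactly when c1 < c2 (leaving overflow aside).  */
  if (v1.cst < v2.cst)
    return -1;
  return v1.cst > v2.cst;
}

int
main (void)
{
  struct symval a = { "x_1", 1, 3 };   /* -x_1 + 3 */
  struct symval b = { "x_1", 1, 7 };   /* -x_1 + 7 */
  struct symval c = { "y_2", 1, 0 };   /* -y_2     */

  printf ("%d %d\n", compare_symvals (a, b),   /* -1 */
          compare_symvals (a, c));             /* -2 */
  return 0;
}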

>> Otherwise if a range becomes -t + inv that will no longer match
>> get_single_symbol for further propagation?
>
> Yes, it will still match; NEGATE_EXPR is handled by get_single_symbol.

Ah, I see it now.
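
To double-check my reading of that, here's a toy decomposition
(standalone C with an invented tree-like representation; not the actual
get_single_symbol) showing that a bound of the form '-NAME + CST' still
exposes a single symbol once the invariant and the negation are peeled
off:

/* Toy model (not GCC code) of decomposing a symbolic bound of the
   form "[-] NAME [+ CST]" into a single symbol, a negation flag and
   an invariant, the way further propagation needs it.  */
#include <stdio.h>

enum code { E_NAME, E_NEGATE, E_PLUS };

struct expr
{
  enum code code;
  const char *name;         /* for E_NAME */
  struct expr *op0;         /* for E_NEGATE and E_PLUS */
  long cst;                 /* for E_PLUS: op0 + cst */
};

/* Return the single symbol in E, if any, setting *NEG and *INV.  */
static const char *
single_symbol (struct expr *e, int *neg, long *inv)
{
  *neg = 0;
  *inv = 0;
  if (e->code == E_PLUS)
    {
      *inv = e->cst;
      e = e->op0;
    }
  if (e->code == E_NEGATE)
    {
      *neg = 1;
      e = e->op0;
    }
  return e->code == E_NAME ? e->name : NULL;
}

int
main (void)
{
  /* Build "-t + 7".  */
  struct expr t = { E_NAME, "t", NULL, 0 };
  struct expr neg_t = { E_NEGATE, NULL, &t, 0 };
  struct expr bound = { E_PLUS, NULL, &neg_t, 7 };

  int neg;
  long inv;
  printf ("%s neg=%d inv=%ld\n", single_symbol (&bound, &neg, &inv), neg, inv);
  return 0;
}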

>> Then I'm not sure if
>>
>> +      /* Try with VR0 and [-INF, OP1].  */
>> +      set_value_range (&new_vr1, VR_RANGE, vrp_val_min (expr_type), op1, NULL);
>> +      extract_range_from_binary_expr_1 (vr, code, expr_type, &vr0, &new_vr1);
>> +      if (vr->type != VR_VARYING)
>> +       return;
>> +
>> +      /* Try with VR0 and [OP1, +INF].  */
>> +      set_value_range (&new_vr1, VR_RANGE, op1, vrp_val_max (expr_type), NULL);
>> +      extract_range_from_binary_expr_1 (vr, code, expr_type, &vr0, &new_vr1);
>> +      if (vr->type != VR_VARYING)
>> +       return;
>>
>> is a safe thing to do.  If it does make a difference to try [-INF, OP1],
>> [OP1, +INF] instead of just [OP1, OP1] then at least it's very suspicious ;)
>> (or an "easy" missed optimization).
>
> Yes, it makes a difference and is required to eliminate half-symbolic ranges
> (the ones deduced from x >= y).  Otherwise extract_range_from_binary_expr_1
> will reject the resulting range because it cannot compare the bounds.
>
> As discussed, extract_range_from_binary_expr_1 doesn't do any fiddling with
> the input ranges; it just computes the resulting range and sees whether it can
> interpret it.  Half-symbolic ranges cannot be interpreted and probably should
> not be, in the general case at least, because I think that can be very delicate,
> so extract_range_from_binary_expr will just try all the possible combinations
> for PLUS and MINUS.
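
To have a concrete source fragment in mind (my own toy example, not one
of the attached testcases), the half-symbolic ranges in question come
from a guard like this:

int
foo (int x, int y)
{
  if (x < y)        /* i.e. !(x >= y): x = [-INF, y - 1], y = [x + 1, +INF] */
    return y - x;   /* the subtraction whose range is discussed below */
  return 1;
}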

So when computing a range for z in

 z = y - x;

with x = [-INF, y - 1] and y = [x + 1, +INF] (deduced from !(x >= y)), we fail to
do something sensible with [y, y] - [-INF, y - 1] or [x + 1, +INF] - [x, x], but
we do something with [x + 1, +INF] - [-INF, x]?  That seems odd to me.

With the patch we compute z to be [1, +INF(OVF)].

Going the [x + 1, +INF] - [x, x] path first, we obtain

  [1, -x + INF]

which fails the sanity checking

  cmp = compare_values (min, max);
  if (cmp == -2 || cmp == 1)
    {
      /* If the new range has its limits swapped around (MIN > MAX),
         then the operation caused one of them to wrap around, mark
         the new range VARYING.  */
      set_value_range_to_varying (vr);
    }
  else
    set_value_range (vr, type, min, max, NULL);

but clearly the same reasoning that makes trying [-INF, x] valid (it only
enlarges the range) applies here, too, when computing +INF - x for the upper
bound: we can safely widen that bound to +INF, making the range valid for the
above test.
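
To make that concrete, here is a small standalone model (plain C with an
invented representation; not how tree-vrp.c stores ranges) of the two
subtraction paths for y = [x + 1, +INF]: subtracting [x, x] produces the
half-symbolic upper bound +INF - x, which the sanity check cannot order
against the constant lower bound 1, while subtracting [-INF, x] (or,
equivalently, widening that upper bound to +INF) yields the valid
[1, +INF]:

/* Toy model of the bound arithmetic above, for illustration only.  */
#include <stdio.h>

/* A bound is -INF, +INF, a constant, "x + cst", or the half-symbolic
   "+INF - x" that the sanity check cannot order.  */
enum kind { B_NINF, B_PINF, B_CONST, B_SYM, B_PINF_MINUS_SYM };

struct bound { enum kind kind; long cst; };  /* cst used by B_CONST/B_SYM */

/* Subtract bound B from bound A; only the cases needed here are handled.  */
static struct bound
sub (struct bound a, struct bound b)
{
  struct bound r = { B_CONST, 0 };
  if (a.kind == B_SYM && b.kind == B_SYM)
    r.cst = a.cst - b.cst;             /* (x + c1) - (x + c2) = c1 - c2 */
  else if (a.kind == B_PINF && b.kind == B_NINF)
    r.kind = B_PINF;                   /* +INF - -INF = +INF */
  else if (a.kind == B_PINF && b.kind == B_SYM)
    r.kind = B_PINF_MINUS_SYM;         /* the half-symbolic "+INF - x" */
  return r;
}

/* Can MIN and MAX be ordered?  This is where "+INF - x" loses.  */
static int
comparable (struct bound min, struct bound max)
{
  return min.kind != B_PINF_MINUS_SYM && max.kind != B_PINF_MINUS_SYM;
}

static void
show (const char *what, struct bound min, struct bound max)
{
  if (!comparable (min, max))
    printf ("%s: uncomparable bounds -> VARYING (unless widened)\n", what);
  else if (max.kind == B_PINF)
    printf ("%s: [%ld, +INF]\n", what, min.cst);
  else
    printf ("%s: [%ld, %ld]\n", what, min.cst, max.cst);
}

int
main (void)
{
  /* y = [x + 1, +INF]; [a, b] - [c, d] = [a - d, b - c].  */
  struct bound y_min = { B_SYM, 1 }, y_max = { B_PINF, 0 };
  struct bound x_sym = { B_SYM, 0 }, ninf = { B_NINF, 0 };

  show ("y - [x, x]   ", sub (y_min, x_sym), sub (y_max, x_sym));
  show ("y - [-INF, x]", sub (y_min, x_sym), sub (y_max, ninf));
  return 0;
}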

But I wonder which code path in the routine still relies on that sanity
checking to produce a valid result (so I'd rather try removing it, or
treating uncomparable bounds as a valid range).

The simplest would be to just do

          set_value_range (vr, type, min, max, NULL);
          return;

and be done with it in the plus/minus handling.  With that change the
testcase optimizes OK for me.

I'd say OK with that change and with only trying [op0, op0] and [op1, op1].

Thanks,
Richard.

>
> The testcases were attached to the first message in the thread, but I presume
> that decent mail programs are a thing of the past. :-)  Attached again.

Heh ;)

> --
> Eric Botcazou
