> So when computing a range for z in
>
>   z = y - x;
>
> with x = [-INF, y - 1] and y = [x + 1, +INF] (deduced from !(x >= y)) we
> fail to do sth sensible with [y, y] - [-INF, y - 1] or
> [x + 1, +INF] - [x, x] but we do sth with [x + 1, +INF] - [-INF, x]?
> That seems odd to me.
>
> With the patch we compute z to [1, +INF(OVF)]
Right, and note the overflow.

> Going the [x + 1, +INF] - [x, x] path first we obtain
>
>   [1, -x + INF]
>
> which fails the sanity checking
>
>       cmp = compare_values (min, max);
>       if (cmp == -2 || cmp == 1)
>         {
>           /* If the new range has its limits swapped around (MIN > MAX),
>              then the operation caused one of them to wrap around, mark
>              the new range VARYING.  */
>           set_value_range_to_varying (vr);
>         }
>       else
>         set_value_range (vr, type, min, max, NULL);
>
> but clearly the same reasoning you can apply that makes trying
> with [-INF, x] valid (it's just enlarging the range) can be applied
> here, too, when computing +INF - x for the upper bound.  We can
> safely increase that to +INF making the range valid for the above
> test.

I don't think we can enlarge to +INF, because -x + INF can overflow; we can
only enlarge to +INF(OVF).

> But I wonder what code path in the routine still relies on that sanity
> checking to produce a valid result (so I'd rather try removing it, or
> taking uncomparable bounds as a valid range).
>
> Simplest would be to simply do
>
>   set_value_range (vr, type, min, max, NULL);
>   return;
>
> and be done with that in the plus/minus handling.  With that the
> testcase optimizes ok for me.

With [1, -x + INF] as the resulting range?  But it can be bogus if x is
itself equal to +INF (unlike the input range [x + 1, +INF], which is always
correct), so this doesn't look valid to me.  I don't see how we can get
away without a +INF(OVF) here, but I can compute it in
extract_range_from_binary_expr_1 if you prefer and try only [op0, op0] and
[op1, op1].

-- 
Eric Botcazou