On 02/07/2017 01:39 AM, Richard Biener wrote:
On Mon, Feb 6, 2017 at 10:57 PM, Jeff Law <l...@redhat.com> wrote:
On 02/06/2017 08:33 AM, Richard Biener wrote:

ah, indeed vr0type is VR_ANTI_RANGE, and yes, we have the case of a
range with an anti-range "inside".  This also covers [-1,1] v ~[0,0],
where you choose the much larger anti-range now.  So at least we want
to have some idea about the sizes of the ranges (ideally we'd choose
the smaller, though in further propagation anti-ranges often
degenerate to varying...)

vr0 as an anti-singleton range like ~[0,0] is the only one likely
of any interest right now and that's always going to have a range
that is all but one value :-)

vr1 is the tricky case.  We could compute vr1.max - vr1.min, and if
that overflows or is some "large" value (say > 65536, just to throw
out a number), then we conclude creating the singleton anti-range
like ~[0,0] is more useful.
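
Something along these lines, as a standalone sketch -- the
value_range_sketch struct and the 65536 cutoff are made-up stand-ins
here, not the real tree-vrp.c types:

/* Stand-in for the relevant bits of a range; not GCC's internal type.  */
struct value_range_sketch
{
  long min, max;   /* inclusive bounds */
};

/* Return true if VR1 is wide enough that keeping vr0's singleton
   anti-range ~[0,0] is likely the more useful result.  */
static bool
prefer_singleton_anti_range (const value_range_sketch &vr1)
{
  /* Do the subtraction in unsigned arithmetic so a span covering
     most of the type wraps instead of overflowing.  */
  unsigned long width
    = (unsigned long) vr1.max - (unsigned long) vr1.min;
  return width > 65536;   /* arbitrary "large" cutoff */
}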

Yes, but it's hard to tell.  As said, anti-ranges quickly degrade in
further propagation and I fear that without a better range
representation it's hard to do better in all cases here.
All very true. What I threw into testing overnight was the >65536 heuristic. While I don't expect we're going to see any testing failures, I also suspect we don't have many cases where wide ranges are all that useful, and even less testsuite coverage of those cases.

One more targeted approach would be to *wipe* the old range when it is wide and we discover ~[0,0] as a new range. We could do this just for EXACT_DIV_EXPR. We'd do this much earlier than vrp_intersect so that we'd still have enough context to know we're dealing with EXACT_DIV_EXPR. It's obviously much less likely to have side effects, but feels even more like a hack to me.
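
Roughly like this, as a hypothetical standalone sketch (all names are
made up; the real hook would have to sit wherever EXACT_DIV_EXPR
results are recorded, before vrp_intersect runs):

/* Stand-in range representation; not GCC's internal type.  */
struct value_range_sketch
{
  long min, max;   /* inclusive bounds */
  bool anti;       /* true for ~[min,max] */
};

/* If an EXACT_DIV_EXPR produced the singleton anti-range ~[0,0]
   and the pre-existing range is wide, drop the old range rather
   than letting the later intersection throw ~[0,0] away.  */
static void
maybe_wipe_wide_range (value_range_sketch &old_vr,
                       const value_range_sketch &new_vr,
                       bool from_exact_div)
{
  bool new_is_nonzero
    = new_vr.anti && new_vr.min == 0 && new_vr.max == 0;
  unsigned long old_width
    = (unsigned long) old_vr.max - (unsigned long) old_vr.min;
  if (from_exact_div && new_is_nonzero && old_width > 65536)
    old_vr = new_vr;
}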

*If* we come up with something useful, it might suggest a different approach to even discovering the ~[0,0] case for EXACT_DIV_EXPR. Part of the problem is we have a numerator of ~[0,0]. We convert that into [MININT,-1] U [1,MAXINT], then apply the denominator's range to both halves, generating [MININT/4,0] U [0,MAXINT/4], which we convert to [MININT/4,MAXINT/4].
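
Concretely, assuming a denominator of exactly 4 and 32-bit ints
(plain C++, just to show where the zero sneaks in):

#include <limits.h>
#include <stdio.h>

int
main ()
{
  long d = 4;   /* denominator */
  /* ~[0,0] split into its two sub-ranges, each divided by 4.
     Truncating division maps both -1/4 and 1/4 to 0, so each
     half now touches zero...  */
  printf ("[%ld,%ld] U [%ld,%ld]\n",
          (long) INT_MIN / d, -1 / d, 1 / d, (long) INT_MAX / d);
  /* ...and the union [INT_MIN/4,INT_MAX/4] no longer excludes 0,
     even though an EXACT_DIV_EXPR can never produce it here.  */
  return 0;
}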

But the initial conversion of the numerator's range is imprecise for EXACT_DIV_EXPR. If we mask out the low-order bits at conversion time, we end up generating the (reasonable) [MININT/4,-1] U [1,MAXINT/4]. Then we union those together into [MININT/4,MAXINT/4], but if we had a good heuristic we might be able to realize ~[0,0] is better.
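
With the same assumed denominator of 4, the masked conversion looks
like this (again a standalone illustration, not tree-vrp.c code):

#include <limits.h>
#include <stdio.h>

int
main ()
{
  long d = 4;
  /* For EXACT_DIV_EXPR the numerator must be a multiple of 4, so
     tighten each sub-range bound to the nearest such multiple:
     [INT_MIN,-1] is really [INT_MIN,-4], and [1,INT_MAX] is really
     [4,INT_MAX rounded down to a multiple of 4].  */
  long neg_max = -d;
  long pos_min = d;
  long pos_max = (long) INT_MAX / d * d;
  printf ("[%ld,%ld] U [%ld,%ld]\n",
          (long) INT_MIN / d, neg_max / d, pos_min / d, pos_max / d);
  /* Prints [-536870912,-1] U [1,536870911]: zero is excluded, so a
     heuristic comparing this union against ~[0,0] has something to
     work with.  */
  return 0;
}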



 The fact is
we can't represent the result of the intersection and thus we have to
conservatively choose an approximation.
Right. There's no way a priori to know for certain which range representation is better. Though we have some sense that a wide range is not generally useful (for some definition of wide) and a singleton anti-range sometimes has value (because it can be used to propagate simple equivalences). We could approximate the latter by looking at the uses of the SSA_NAME and guessing which of those would benefit from the propagation a singleton anti-range enables. That sounds painful, though.
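
For illustration only, a toy model of that scan -- the use_kind enum
and the vector stand in for walking the SSA_NAME's immediate uses,
and every name here is made up:

#include <vector>

/* Crude classification of a use site.  */
enum use_kind { USE_EQ_COMPARE, USE_ARITHMETIC, USE_OTHER };

/* Guess whether a singleton anti-range like ~[0,0] would pay off:
   equality/inequality tests against the excluded value are its
   main consumers.  */
static bool
anti_range_probably_useful (const std::vector<use_kind> &uses)
{
  for (use_kind u : uses)
    if (u == USE_EQ_COMPARE)
      return true;
  return false;
}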


 Sometimes we have the other
range on an SSA name and thus can use equivalences (when coming from
assert processing), but most of the time not and thus we can't use
equivalences (which use SSA name versions rather than an index into
a ranges array - one possible improvement to the range
representation).
Right.

 Iff ~[0,0] is useful information, then querying something for
non-null should also look at equivalences, btw.
~[0,0] is primarily useful in pointer contexts -- null pointer check elimination and pointer arithmetic. Outside those contexts I doubt it has all that much value.
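
For example (ordinary source-level code, just to illustrate the
null-check case; use() is a placeholder):

extern void use (int);

void
consume (int *p)
{
  int v = *p;   /* p is known ~[0,0] (non-null) after this load */
  if (p == 0)   /* provably false, so VRP can delete the test */
    __builtin_abort ();
  use (v);
}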

Sadly, I don't think we can rely on types either -- IIRC everything is cast to a generic integer type, so we can't key on ptrdiff_t.

Jeff
