On 01/05/2017 08:52 PM, Martin Sebor wrote:
So Richi asked for removal of the VR_ANTI_RANGE handling, which would
imply removal of operand_signed_p.

What are the implications if we do that?

I just got back to this yesterday.  The implications of the removal
of the anti-range handling are a number of false negatives in the
test suite:
I was thinking more at a higher level.  ie, are the warnings still
useful if we don't have the anti-range handling?  I suspect so, but
would like to hear your opinion.
...
  n = ~[-4, MAX];   (I.e., n is either negative or too big.)
  p = malloc (n);
Understood.  The low-level question is whether we get these kinds of
ranges often enough in computations leading to allocation sizes.
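[For illustration, here is a hedged sketch, not taken from the thread,
of one way VRP can derive an anti-range like the one above.  The
function name is made up; the early return excludes [-4, MAX], so on
the path that reaches malloc, n is known to lie outside that interval,
i.e. to be negative, which the implicit conversion to size_t turns
into a huge request.]

```c
#include <stdlib.h>

/* Illustrative only: the guard excludes [-4, LONG_MAX], so on the
   fall-through path VRP can record n = ~[-4, MAX], i.e. n is in
   [LONG_MIN, -5].  Converted to size_t at the call, that negative
   value becomes an enormous allocation size.  */
void *
alloc_buf (long n)
{
  if (n >= -4)
    return NULL;        /* excludes the range [-4, MAX] */
  return malloc (n);    /* here n = ~[-4, MAX]: negative */
}
```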

My intuition tells me that they are likely common enough not to
disregard but I don't have a lot of data to back it up with.  In
a Bash build a full 23% of all checked calls are of this kind (24
out of 106).  In a Binutils build only 4% are (9 out of 228).  In
Glibc, a little under 3%.  My guess is that the number will be
inversely proportional to the quality of the code.
I think you've made the case that we do want to handle this.  What's
left is how best to avoid the infinite recursion and mitigate the
pathological cases.

What you're computing seems to be "this object may have been derived from a signed type". Right? It's a property we can compute for any given SSA_NAME and it's not context/path specific beyond the context/path information encoded by the SSA graph.

So just thinking out loud here: could we walk the IL once to identify
call sites and build a worklist of SSA_NAMEs we care about, then
iterate on the worklist much like the code Aldy is working on for the
unswitching vs. uninitialized variable issue?
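[For concreteness, the worklist iteration described above might look
roughly like the following.  This is a toy sketch over an adjacency
matrix, not GCC's IL or Aldy's code; every name here is made up.  The
idea is that "may derive from a signed value" is a monotone property,
so it can be propagated over the def-use graph to a fixed point
instead of being recomputed recursively at each call site.]

```c
#include <stdbool.h>

#define N 5   /* toy stand-in for the number of SSA names */

/* uses[d][u] is true when "name" u uses the value of "name" d.  */
static bool uses[N][N];
static bool maybe_signed[N];

/* Propagate the maybe_signed bit forward along def-use edges until
   a fixed point is reached.  Each node is pushed at most twice
   (once as a seed, once when its bit flips), so 2 * N bounds the
   worklist.  */
static void
propagate (void)
{
  int worklist[2 * N];
  int top = 0;

  for (int i = 0; i < N; i++)
    if (maybe_signed[i])
      worklist[top++] = i;

  while (top > 0)
    {
      int d = worklist[--top];
      for (int u = 0; u < N; u++)
	if (uses[d][u] && !maybe_signed[u])
	  {
	    maybe_signed[u] = true;   /* property flows to the use */
	    worklist[top++] = u;      /* revisit its uses in turn */
	  }
    }
}
```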

Jeff
