On 01/05/2017 08:52 PM, Martin Sebor wrote:
So Richi asked for removal of the VR_ANTI_RANGE handling, which would
imply removal of operand_signed_p.
What are the implications if we do that?
I just got back to this yesterday. Removing the anti-range handling
introduces a number of false negatives in the test suite:
I was thinking at a higher level, i.e., are the warnings still
useful if we don't have the anti-range handling? I suspect so, but
would like to hear your opinion.
...
n = ~[-4, MAX]; (I.e., n is in [MIN, -5]: negative as signed, or
too big once converted to size_t.)
p = malloc (n);
Understood. The low-level question is: do we get these kinds of ranges
often enough in computations leading to allocation sizes?
My intuition tells me that they are likely common enough not to
disregard, but I don't have a lot of data to back that up. In
a Bash build a full 23% of all checked calls are of this kind (24
out of 106). In a Binutils build only 4% are (9 out of 228). In
Glibc, a little under 3%. My guess is that the number will be
inversely proportional to the quality of the code.
23% for bash is definitely concerning.
m = [-3, 7];
n = [-5, 11];
p = calloc (m, n);
which I suspect are common in the wild as well.
I must be missing something; given those ranges I wouldn't think we'd
have a false positive. The resulting size computation is going to have
a range of [-35, 77], right? ISTM that we'd really want to warn for
that.
The warning is meant to trigger only for cases of arguments that
are definitely too big (i.e., it's not a
-Wmaybe-alloc-size-larger-than type of warning).
OK. That's probably what I was missing. I guess I should have gone
back to the option documentation first.
So IIRC the range for any multiply is produced from the four cross
products of the endpoints. If you clamp the lower bounds at 0, then
three of the cross products drop to 0 and you get a range of
[0, u0 * u1].
And in that case you're not warning because we don't know it's
definitely too big, right?
Let me ponder a bit too :-)
The tradeoff, of course, is false negatives. In the
-Walloc-size-larger-than case it can be mitigated by setting a lower
threshold (I think we might want to consider lowering the default to
something less liberal than PTRDIFF_MAX -- it seems very unlikely
that a program would try to allocate that much memory, especially in
LP64).
Yea, the trick (of course) is finding a suitable value other than
PTRDIFF_MAX.
jeff