On 11/18/2016 10:25 AM, Jakub Jelinek wrote:
> On Fri, Nov 18, 2016 at 10:14:09AM -0700, Martin Sebor wrote:
>> Because people make mistakes and warnings help us avoid them (isn't
>> that obvious?) Just because we get it right most of the time doesn't
>> mean we get it right all of the time. The papers and exploits I pointed
>> to should provide ample evidence that zero allocations are a problem
>> that can have serious (and costly) consequences. We (i.e., Red Hat)
>> recognize this risk and have made it our goal to help stem some of
>> these problems.
> Are you talking about cases where the user writes malloc (0) explicitly, or
> where the user writes malloc (n); and the optimizers figure out n is 0,
> either always, or e.g. because the compiler decided to duplicate the code
> and in one of the branches it proves it is 0, or do you want to add a
> runtime warning when malloc is called with 0?
Both. The existing -Walloca-larger-than warning and the proposed
-Walloc-zero warning diagnose both. The non-constant case (i.e.,
one where the zero is the result of constant propagation) is the
more interesting (and dangerous) one but I don't think there is
value in trying to distinguish the two.
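To illustrate the propagated case, here is a hypothetical sketch (clone_bytes is a made-up function, not from any patch under discussion): no literal zero appears in the source, but when a caller passes len == 0 the optimizers can prove the malloc argument is zero on that path and diagnose it.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: no malloc (0) is written here, but constant
   propagation can expose one when a caller passes len == 0.  */
char *clone_bytes (const char *src, size_t len)
{
  char *p = malloc (len);   /* -Walloc-zero may fire when len is known to be 0 */
  if (p != NULL)
    memcpy (p, src, len);
  return p;
}
```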
> E.g. I don't see how writing malloc (0) in the code should have anything
> to do with any overflows. The cases where jump threading creates cases
> where we call functions with constant arguments and we then warn on them
> are also problematic; often such code is even dead, except the compiler
> is not able to prove it.
> It IMHO doesn't belong in either of these.
You've made that quite clear. I struggle to reconcile your
position in this case with the one in the debate about the
-Wimplicit-fallthrough option, where you insisted on the exact
opposite despite the overwhelming number of false positives
caused by it. Why is it appropriate for that option to be in
-Wextra and not this one?
> It also matters what one can do to tell the compiler the code is fine.
> For e.g. -Wimplicit-fallthrough and many other warnings one can add
> a comment, or attribute, etc. to get the warning out of the way.
> But the patch you've posted for the alloca (0) stuff contained
> changes of n to n + !n, so the warning basically asks people to slow
> down their code for no reason, just to get the warning out of the way.
No, it does not. There are ways to avoid the warning without
changing the value of the argument. The obvious one is to set
-Wno-alloc-zero, either on the command line or via pragma GCC
diagnostic. Another one is to call the alloca function instead
of the builtin (the function is declared without attribute
alloc_size, at least in Glibc, but GCC still expands it inline).
This is as simple as enclosing alloca in parentheses:

  void *p = (alloca)(n);
Ugly? Perhaps. One might say that code that does tricky or
unportable things is ugly for that reason alone. IMO, it's
a good thing when it also looks unusual (or even ugly). It
draws attention to itself and is less likely to be copied
or reused, or broken by those who don't realize that it's
fragile.
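For completeness, a sketch of the pragma GCC diagnostic route mentioned earlier (make_block is a name invented for illustration; the argument and the generated code are left untouched):

```c
#include <stdlib.h>

/* Sketch: suppress the proposed -Walloc-zero for just this one call
   using the GCC diagnostic pragmas, without changing the argument.  */
void *make_block (size_t n)
{
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Walloc-zero"
  void *p = malloc (n);
#pragma GCC diagnostic pop
  return p;
}
```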
The specific patch we're discussing touches just 5 functions
in all of GCC, and it changes an alloca(n) to alloca(n + !n).
Your original objection was that the fix was too ugly. I'd
be happy to change it to (n + 1) if that makes it less
offensive to you (or to anything else that lets us move
forward, including the (alloca)(n) above). Either way, none
of these calls is in hot loops so the runtime impact of any
of this on GCC hardly seems worth talking about.
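As an aside, the n + !n idiom works because !n is 1 exactly when n is zero, so the expression equals n for every nonzero size and 1 otherwise; a trivial sketch (nonzero_size is a made-up helper name):

```c
#include <stddef.h>

/* The n + !n idiom: maps 0 to 1 and leaves every other value unchanged,
   so the allocator never sees a zero-byte request.  */
static size_t nonzero_size (size_t n)
{
  return n + !n;
}
```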
Having said that, after another review of the functions changed
by the patch it looks like avoiding the zero case (e.g., by
returning early or branching) might actually improve their
runtime performance (though I doubt that would be worth the
effort either). The same could well be true for other such
instances of the warning.
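A sketch of the early-return style alluded to above (dup_bytes is a made-up example, not one of the five functions touched by the patch):

```c
#include <stdlib.h>
#include <string.h>

/* Early-return sketch: skip the allocation (and the work that follows)
   entirely when the size is zero, rather than padding the argument.  */
char *dup_bytes (const char *src, size_t n)
{
  if (n == 0)
    return NULL;        /* nothing to copy; malloc (0) never happens */
  char *p = malloc (n);
  if (p != NULL)
    memcpy (p, src, n);
  return p;
}
```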
Martin
PS I resisted changing the XALLOCAVEC macro itself but I'm not
opposed to it and it's also a possibility.