Ralph,
Thanks for your further explanation of this optimization. Here is what
I understand. Please correct me if I am wrong on any of these points:
1. The description in VU#162289 misrepresents the problem as a length
check. It is actually a check for wrap.
2. The optimization in this code depends on the assumption that
wrapping will not occur or, at the very least, that wrapping is
undefined behavior, so the compiler is not required to generate code
for this condition.
3. Both expressions, (buf + len < buf) and (buf + n < buf + 100), can
be optimized assuming the subexpressions cannot wrap.
4. Neither expression can be optimized assuming modulo behavior, for
example if buf were cast to uintptr_t, because there are values of buf
and n where (buf + n < buf + 100) evaluates to false while (n < 100)
evaluates to true (a concrete instance is sketched after this list).
5. Both code examples are poorly written and could be expressed better.
The test for wrap would be better written as:
if ((uintptr_t)buf + len < (uintptr_t)buf)
and your expression should simply be written as:
n < 100
6. The Overview of VU#162289 states "Some C compilers optimize away
pointer arithmetic overflow tests that depend on undefined behavior
without providing a diagnostic (a warning)."
The point is that we are not objecting to the optimization; we are
objecting to the lack of any diagnostic.
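For concreteness, here is a small sketch of point 4 (my own
illustration, not code from the report). Under modulo arithmetic the
two tests disagree for values near the top of the address space:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Take buf 60 bytes below the top of the address space, n = 50. */
    uintptr_t buf = UINTPTR_MAX - 60;
    uintptr_t n = 50;

    /* buf + n does not wrap, but buf + 100 wraps to a small value,
       so this prints 0 ... */
    printf("buf + n < buf + 100 : %d\n", buf + n < buf + 100);

    /* ... while the "equivalent" optimized form prints 1. */
    printf("n < 100             : %d\n", n < 100);

    return 0;
}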
Because both examples could benefit from being rewritten (which is
what the compiler is effectively doing), I don't see the harm in
issuing a diagnostic in both cases.
rCs
Robert,
You have failed to answer my original question, and I think you have
failed to understand the point of the example.
The example shows that the optimisation you claim is a vulnerability
in 162289, the one that removes checks that rely on pointer arithmetic
wrapping, is in fact a security feature.
Just to be 100% clear, the optimisation I gave you an example of, and
which you think is "pretty cool", is precisely the same optimisation
you are complaining about in 162289: a pointer value may be canceled
from both sides of a comparison.
This is slightly confused by the fact that in the 162289 example, two
optimisations take place [gcc gurus please correct if I am wrong]:
(a) a pointer value is canceled from both sides of a comparison,
changing (buf+len<buf) to (len<0). This is what changes the observable
behaviour of the code, and is what the debate is about.
(b) in the example given in 162289, (len<0) can then be evaluated at
compile time, removing the test entirely. This does not change the
runtime behaviour of the code and is completely irrelevant.
To make it even clearer, we can disable optimisation (b) while leaving
optimisation (a), by making 'len' volatile:
int foo (char * buf)
{
    /* volatile defeats the compile-time evaluation in (b), leaving
       only the cancellation in (a) */
    volatile int len = 1<<30;
    if (buf+len < buf)
        return 1;
    else
        return 0;
}
gcc then generates:
foo:
        subl    $16, %esp
        movl    $1073741824, 12(%esp)
        movl    12(%esp), %eax
        addl    $16, %esp
        shrl    $31, %eax
        ret
This has not completely removed the test, but when executed, it will
still always return 0, giving precisely the same run-time behaviour as
without the 'volatile'.
To reiterate:
(a) Why does CERT advise against an optimisation that, under the right
circumstances, can remove examples of a historically significant class
of security holes? (A sketch of such a case follows below.)
(b) The example you claim is a length check in 162289 is not a length
check and does not support the conclusions you draw from it.
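For reference, a minimal sketch of the kind of check I mean in (a);
this is my reconstruction, not the exact code from earlier in the
thread:

/* A bounds check of the historically dangerous kind.  Without the
   optimisation, buf + n can wrap for huge n on typical hardware, and
   the check then passes when it should not.  Once the compiler
   cancels buf from both sides, the test becomes n < 100, which is
   correct for every n. */
int fits(char *buf, unsigned n)
{
    return buf + n < buf + 100;
}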
Ralph.
The optimization doesn't actually eliminate the wrapping behavior;
that still occurs. It does, however, eliminate certain kinds of checks
(those that depend upon undefined behavior).
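To illustrate (this is my paraphrase of the effect, not gcc's actual
output):

/* What the programmer wrote: a wrap check that relies on undefined
   behavior. */
int check(char *buf, int len)
{
    if (buf + len < buf)  /* assumed never to wrap: always false */
        return 1;
    return 0;
}

/* What the optimized program effectively behaves like: the check is
   gone, but an addition such as buf + len elsewhere in the code would
   still wrap on typical hardware. */
int check_optimized(char *buf, int len)
{
    (void)buf;
    (void)len;
    return 0;
}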
Thanks,
rCs