Paolo Bonzini <[EMAIL PROTECTED]> writes:

> On the autoconf mailing list, Paul Eggert mentioned as a good
> compromise that GCC could treat signed overflow as undefined only for
> loops and not in general.
What I meant to propose (and perhaps did not propose clearly enough) is
that if a C application checks for integer overflow, GCC should not
optimize that check away; the other loop optimizations, though, are OK.

That probably sounds vague, so here's the code that beta gcc -O2
actually broke (which started this whole thread):

    int j;
    for (j = 1; 0 < j; j *= 2)
      if (! bigtime_test (j))
        return 1;

Here it is obvious to a programmer that the comparison is intended to
do overflow checking, even though the test also controls the loop.
Can gcc -O2 be made "smart" enough to notice this, and not optimize
the test away?

Another question for the GCC experts: would it fix the bug if we
replaced "j *= 2" with "j <<= 1" in this sample code?

I ask the latter question partly because nobody has yet proposed a
portable fix for this bug.  The patch proposed in
<http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html>
worked for Ralf, but it does not work in general.  It attacks the
problem by changing "int j" to "unsigned j".  But because bigtime_test
wants an int, this causes the test program to compute the equivalent
of (int) ((unsigned int) INT_MAX + 1), and C99 says that, if you
cannot assume wrapping semantics, this conversion yields an
implementation-defined result (or raises an implementation-defined
signal) in the common case where INT_MAX < UINT_MAX.  Obviously this
latter problem can be worked around too, but what a mess, huh?