"Joseph S. Myers" <[EMAIL PROTECTED]> writes: > Encapsulate reliable overflow checks for the various > arithmetic operations and types in functions or macros in > gnulib (for long long multiply, in this case).
That might be the best approach in the long run, but it would take a
lot of painstaking analysis work in GNU applications and libraries.
I doubt we have time to do that right now.

We do, however, have the time to take a simple approach that will
work globally to preserve the traditional wrapped-overflow semantics
of C.  Compiling everything with -fwrapv is simple.  It has
optimization drawbacks, but if that's the best we can do now, then
we'll probably do it.  And once we do it, human nature suggests that
we will generally not bother with the painstaking analysis needed to
omit -fwrapv.

If GCC had an option that let it compile overflow-checking code (like
the example below) as the programmer intended, while still doing loop
induction optimizations, then we'd no doubt use that option instead
of -fwrapv.  That sounds like a reasonable compromise.  I would argue
that such an option should be the default for -O2, as -O2 is so
commonly used.

> On Tue, 19 Dec 2006, Paul Eggert wrote:
>
>> What worries me is code like this (taken from GNU expr; the vars are
>> long long int):
>>
>>   val = l->u.i * r->u.i;
>>   if (! (l->u.i == 0 || r->u.i == 0
>>          || ((val < 0) == ((l->u.i < 0) ^ (r->u.i < 0))
>>              && val / l->u.i == r->u.i)))
>>     integer_overflow ('*');
>>
>> This breaks if signed integer overflow has undefined behavior.
>
> Convert to unsigned and do the overflow tests using unsigned arithmetic.

Sure, but that is trickier.  In many cases code operates on types
like time_t that are signed on some platforms and unsigned on others.
It's easy for such code to test for overflow if you assume wraparound
arithmetic, as code like { sum = a + b; if ((sum < a) != (b < 0))
overflow (); } is valid regardless of signedness.  It's not so easy
if you cannot assume wraparound arithmetic, particularly if
performance is an issue (not the case in GNU expr, but it is true
elsewhere).

Also, such an approach assumes that unsigned long long int has at
least as many bits as long long int.  But this is an unportable
assumption; C99 does not require this.  We have run into hosts where
the widest signed integer type has much greater range than the widest
unsigned type.  I hope these hosts go away in the long run, but
they're in production use today.  (The platform I'm thinking of is
Tandem NSK/OSS.)
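For comparison, here is a minimal sketch (not taken from GNU expr or
from the message above) of an addition check that avoids relying on
wraparound by testing against the type's limits before performing the
operation.  The checked_add helper and the fixed long long int type
are assumptions for illustration; the sketch also shows why this form
is wordier and type-specific, and so does not carry over cleanly to a
type like time_t whose signedness varies across platforms.

  #include <limits.h>
  #include <stdio.h>
  #include <stdlib.h>

  static void
  overflow (void)
  {
    fprintf (stderr, "integer overflow\n");
    exit (EXIT_FAILURE);
  }

  /* Add A and B without ever evaluating a signed expression that
     overflows, so the test is valid even when signed overflow has
     undefined behavior.  The check is tied to one specific signed
     type and must be rewritten for each width and signedness.  */
  static long long int
  checked_add (long long int a, long long int b)
  {
    if (b > 0 ? a > LLONG_MAX - b : a < LLONG_MIN - b)
      overflow ();
    return a + b;
  }

  int
  main (void)
  {
    printf ("%lld\n", checked_add (2, 3));  /* prints 5 */
    checked_add (LLONG_MAX, 1);             /* reports overflow and exits */
    return 0;
  }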