> >> Joe Buck wrote:
> >> Here's a simple example.
> >>
> >>   int blah(int);
> >>
> >>   int func(int a, int b) {
> >>     if (b >= 0) {
> >>       int c = a + b;
> >>       int count = 0;
> >>       for (int i = a; i <= c; i++)
> >>         count++;
> >>       blah(count);
> >>     }
> >>   }
> >
> > Mark Mitchell wrote:
> > I just didn't imagine that these kinds of opportunities came up very
> > often.  (Perhaps that's because I routinely write code that can't be
> > compiled well, and so don't think about this situation.  In particular,
> > I often use unsigned types when the underlying quantity really is
> > always non-negative, and I'm saddened to learn that doing that would
> > result in inferior code.)
>
> However, it's not clear that an "optimization" which alters side effects
> which have subsequent dependents is ever desirable (unless of course the
> goal is to produce the same likely useless result as fast as some other
> implementation may, but without any other redeeming benefits).
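To recap the transformation at issue: since signed overflow is undefined
behavior in C, the compiler may assume a + b does not wrap, conclude that
the loop executes exactly c - a + 1 = b + 1 times, and fold it to a
closed form.  A minimal sketch of what the optimizer might effectively
produce (func_optimized is an illustrative name, not actual GCC output):

    int blah(int);

    void func_optimized(int a, int b) {
        (void)a;              /* a no longer participates */
        if (b >= 0)
            blah(b + 1);      /* trip count c - a + 1 == b + 1 */
    }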
On Tue, Jun 28, 2005 at 09:32:53PM -0400, Paul Schlie wrote:
> As the example clearly shows, by assuming that signed overflow traps, when
> it may not, such an optimization actually alters the behavior of the code,

There is no such assumption.  Rather, we assume that overflow does not
occur, and we make no promises about what happens on overflow.  Then, for
the case where overflow does not occur, we get fast code.

For many cases where overflow occurs with a 32-bit int, our optimized
program behaves the same as if we had a wider int.  In fact, the program
will work as if we had 33-bit ints.  Far from producing a useless result,
the optimized program has consistent behavior over a broader range.

To see this, consider what the program does with a=MAX_INT, b=MAX_INT-1.
My optimized version always calls blah(b+1), which is what a 33-bit int
machine would do.  It does not trap.

Since you made an incorrect analysis, you drew incorrect conclusions.
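To make that arithmetic concrete, here is a small self-contained program
(the variable names and the use of 64-bit arithmetic to stand in for a
"wider int" machine are mine) evaluating the example at a = INT_MAX,
b = INT_MAX - 1:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int32_t a = INT32_MAX;          /* 2147483647 */
        int32_t b = INT32_MAX - 1;      /* 2147483646 */

        /* Wider-int semantics: compute c without wrapping.  The loop
           from a to c runs c - a + 1 = b + 1 times. */
        int64_t c_wide = (int64_t)a + b;
        int64_t count_wide = c_wide - a + 1;        /* == b + 1 */

        /* The optimized program simply calls blah(b + 1). */
        int64_t count_opt = (int64_t)b + 1;

        /* 32-bit wraparound semantics (e.g. -fwrapv): c wraps to -3,
           so i <= c is false on entry and the loop never runs.  (The
           int32_t conversion below is implementation-defined; it wraps
           on the usual two's-complement targets.) */
        int32_t c_wrap = (int32_t)((uint32_t)a + (uint32_t)b);
        int32_t count_wrap = (a <= c_wrap) ? (c_wrap - a + 1) : 0;

        printf("wider-int count:  %lld\n", (long long)count_wide);
        printf("optimized count:  %lld\n", (long long)count_opt);
        printf("wraparound count: %d\n", count_wrap);
        return 0;
    }

Both the wider-int count and the optimized count print 2147483647, i.e.
b + 1, the 33-bit behavior described above, while 32-bit wraparound
yields a count of 0.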