Paul Eggert <[EMAIL PROTECTED]> writes:

> Ralf Wildenhues suggested bugzilla originally, but Andrew Pinski
> responded <http://gcc.gnu.org/ml/gcc/2006-12/msg00460.html> that the
> problem "has been observed many, many times and talked about a lot of
> time on this list" and implied strongly that the issue was settled and
> was not going to change. And bugzilla entries complaining about the
> issue (e.g., 18700, 26358, 26566, 27257, 28777) have been closed with
> resolution INVALID and workaround "use -fwrapv". So it seemed to me
> like it would have been a waste of everybody's time to open another
> bugzilla entry; the recommended solution, apparently, was to use
> -fwrapv. Hence the "Subject:" line of this thread.
Well, Andrew does not speak for the gcc community as a whole (and
neither do I). Looking through your list of bugs:

18700: I believe this is correct default behaviour.
26358: I think this is questionable default behaviour.
26566: I think this is questionable default behaviour.
27257: I think this is correct default behaviour.
28777: I think this is questionable default behaviour.

The common theme of these five cases is that I think that gcc should
not by default use the fact that signed overflow is undefined to
completely remove a loop termination test. At least, not without a
warning.

> > Historically we've turned on -fstrict-aliasing at -O2. I think it
> > would take a very strong argument to handle signed overflow
> > differently from strict aliasing.
>
> I take your point that it might be cleaner to establish a new GCC
> option rather than overload -O2. That would be OK with me. So, for
> example, we might add an option to GCC, "-failsafe" say, to disable
> "unsafe" optimizations that may well cause trouble with
> traditional/mainstream applications. We can then change Autoconf to
> default to -O2 -failsafe.
>
> However, in thinking about it more, I suspect most application
> developers would prefer the safer optimizations to be the default,
> and would prefer enabling the riskier ones only with extra -f
> options. Thus, perhaps it would be better to add an option "-frisky"
> to enable these sorts of optimizations.

I don't agree with this point. There is a substantial number of
application developers who would prefer -failsafe. There is a
substantial number who would prefer -frisky. We don't know which set
is larger. We get a lot of bug reports about missed optimizations.

Also, it does not make sense to me to lump together all potentially
troublesome optimizations under a single name. They are not all the
same.

> I think in the long run the best results will come from a series of
> changes, some to GCC, some to Autoconf, some to Gnulib, and some no
> doubt elsewhere. I welcome adding warnings to GCC so that programmers
> are made aware of the problems. If the warnings are reliable and do
> not have too many false alarms, they will go a long way towards
> fixing the problem. However, I doubt whether they will solve the
> problem all by themselves.
>
> I have not installed the Autoconf patch (much less published a new
> version of Autoconf with the patch) because I too would prefer a
> better solution. But the bottom line is that many, many C
> applications need a solution that errs on the side of reliability,
> not one that errs on the side of speed. As far as I can tell the
> Autoconf patch is so far the only proposal on the table with this
> essential property.

I don't really see how you move from the needs of "many, many C
applications" to the autoconf patch. Many, many C applications do not
use autoconf at all.

I think I've already put another proposal on the table, but maybe I
haven't described it properly:

1) Add an option like -Warnv to issue warnings about cases where gcc
   implements an optimization which relies on the fact that signed
   overflow is undefined.

2) Add an option like -fstrict-signed-overflow which controls those
   cases which appear to be risky. Turn on that option at -O2.

It's important to realize that -Warnv will only issue a warning for an
optimization which actually transforms the code. Every case where
-Warnv will issue a warning is a case where -fwrapv will inhibit an
optimization.
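To make that last point concrete, here is a small sketch of the kind of
code in question (the function name is just for illustration):

    #include <limits.h>
    #include <stdio.h>

    /* The "obvious" wraparound check. It relies on signed addition
       wrapping, which is undefined behaviour in C, so gcc is entitled
       to assume that a + b does not wrap and simplify the test to
       b < 0.  */
    int
    wraps_on_add (int a, int b)
    {
      return a + b < a;
    }

    int
    main (void)
    {
      if (wraps_on_add (INT_MAX, 1))
        printf ("wrapped\n");
      else
        printf ("did not wrap\n");
      return 0;
    }

Compiled with plain -O2 this may print "did not wrap", because the
comparison can be folded away; with -O2 -fwrapv the addition is defined
to wrap and the check behaves as the author intended. Under the
proposal above, -Warnv would warn precisely when gcc performs that sort
of transformation.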
Whether this will issue too many false positives is difficult to tell
at this point. A false positive will take the form "this optimization
is OK because I know that the values in question cannot overflow".
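For instance (a sketch; the interface contract is the point):

    /* Sum the elements a[0] through a[n]. Callers guarantee that
       0 <= n <= 1000, so i can never reach INT_MAX.  */
    int
    sum (const int *a, int n)
    {
      int i, s = 0;
      for (i = 0; i <= n; i++)  /* gcc may use the undefinedness of
                                   signed overflow to conclude that i
                                   does not wrap and that this loop
                                   terminates */
        s += a[i];
      return s;
    }

If gcc relies on that assumption when it transforms the loop, -Warnv as
proposed would warn here; the programmer, knowing the contract on n,
would consider the warning a false positive.

Ian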