On 10/08/10 18:38:29, Basile Starynkevitch wrote:
> I am not an expert on these optimizations, but why would you want that? 
> The optimizations involved are indeed expensive (otherwise it would be
> -O1 not -O2), but once you asked for them, why only get warnings
> without the code generation improvement?

Because the optimizations also make the generated code more
difficult to debug, and a buggy optimization can introduce new
defects of its own.
I prefer to get the code working with -O0 and then verify that it
still works after optimization, because I think that minimizes
my development risk and maximizes my productivity.  Along those
lines, I would still like to have all the compile-time warnings
that I can get, and am willing to have my non-optimized builds
go a little slower (say, no more than 20% slower) to have the
additional warnings.

> However, I see a logic in needing -O2 to get some warnings.
> Optimizations are expensive, and they compute static properties of the
> source code, which are usable (& necessary and used) for additional
> warnings.

After hearing the pros and cons, I have come around to the point of view
that GCC's method of detecting things like uninitialized local variables
is part of its optimization architecture.  If I accept that my
development cycle is "first -O0, then full optimization", then I
will have to accept that some warnings might show up only when
optimizations are turned on.  Either that, or I might routinely run a
tool like PC-LINT or Coverity during development, which may minimize
the surprise warnings that pop up when optimizations are enabled.
Or as you suggested, always run two parallel builds: one optimized, and
one not.

I appreciate everyone's ideas and suggestions.  This has
been an interesting discussion thread.

- Gary
