On Mar 12, 2008, at 9:49 PM, Manuel López-Ibáñez wrote:
The clang front-end generates these warnings. This means that the set
of warnings produced by the compiler doesn't change as the optimizer
evolves, the warnings are generally less mystifying to the user, and
they have perfect location info as a side effect. People who use
-Werror tend to prefer not getting new random warnings due to a
compiler upgrade. This approach is similar to what Java compilers and
front-ends like EDG do (afaik).

But then you don't have constant propagation and other optimisations
that remove false positives, do you? Well, we have already discussed
doing all uninitialized warnings before optimisation, but users do
complain about false positives.

There is no right answer, and this topic has been the subject of much debate on the GCC list in the past. I really don't care to debate the merits of one approach vs the other with you; I just answered your question about what clang does.

Amusingly, GCC 4.0 emitted a false positive about this example despite using the techniques you discuss, and no version of GCC emits any of these warnings at -O0 (a very common mode in which some people do the majority of their development).

The Clang project is also growing a static analysis engine which is
very adept at solving path-sensitive versions of these problems, and
which is useful for finding deeper bugs.


I guess that must take considerably more time and resources and be
redundant with some of the work of the optimisers. I guess we cannot
use such an approach in GCC.

I have no opinion about the approach that you take in GCC. In practice, we have been able to do this analysis very quickly and get good results, and will continue to refine them as clang continues to mature.

I personally think that it is a major problem that GCC doesn't produce these diagnostics unless optimizations are enabled, and I continue to think that having diagnostics change depending on what optimization level is enabled is bad.

-Chris
