https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108432
--- Comment #2 from David Malcolm <dmalcolm at gcc dot gnu.org> ---
(In reply to Segher Boessenkool from comment #1)
> Many warning messages are also dependent on optimisation level.  And the
> actual generated code is as well ;-)
>
> -O0 means do the least possible work to generate correct code.  There is
> friction between that and having -fanalyzer do deep inspection of the code.
> I think we should document -fanalyzer needs some optimisation enabled (does
> it need -O2 in some cases, or just -O1 always, btw?)
>
> The suggestion to at least check the last loop iteration is good of course.

Unfortunately, some analyzer warnings work better with optimization
*disabled*.  -fanalyzer runs much later than most other static analyzers.

For example, -Wanalyzer-deref-before-check doesn't work well with
optimization, as the dereference means that the optimizer can remove the
checks before the analyzer "sees" them.

I think there's a natural tension between optimization and detecting
undefined behavior, in that -fanalyzer wants to report on possible undefined
behavior, whereas optimization wants to take advantage of undefined behavior.
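
As a minimal hypothetical sketch of the kind of pattern I mean (not taken
from this PR): the pointer is dereferenced before it is checked, which is
what -Wanalyzer-deref-before-check is meant to flag.

  int test (int *p)
  {
    int val = *p;   /* dereference happens before the check */
    if (!p)         /* with optimization, this check can be folded away... */
      return -1;    /* ...before -fanalyzer ever sees it */
    return val;
  }

Compiled with -fanalyzer at -O0 the check is still present in the IR when
the analyzer runs, so it can report it; with optimization enabled, the
*p implies p is non-null, so the if (!p) branch may already have been
deleted by the time the analyzer looks at the function.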