We're using the Coverity Scan service[*]. We've put in some effort, and we've gotten some mileage out of it, but I feel we could get more.
Judging from the report e-mails I have lying about, we're scanning about once a month on average.

These reports cut off after 20 new defects. When there are more, which is common, people have to go to the web dashboard to see the rest. When I get a report with ten defects, I may have a look; when I get one saying "Showing 20 of 100 defect(s)", I despair of the task and put it off.

I also use Coverity locally (requires a license) with a derived model for GLib to increase scanning power (a sketch of how such a model works is at the end of this mail). Since last July, the number of defects I get that way has grown from ~400 to ~700. That's not quite as bad as it sounds, because ~100 of the new ones are DEADCODE. Still, it suggests we haven't made much progress in reducing the number of defects to a manageable level.

Some of the new defects are avoidable. For instance, we've added 16 MISSING_BREAK. Probably just missing /* fall through */ annotations (example at the end of this mail), but we can't be sure without examining each case. Patch review fail. At the other end of the spectrum, I see 36 new UNINIT defects.

I think we should scan much more regularly. Once a week, fully automated? I further think we should send the e-mail report to the list, to have more eyes on it.

Opinions?

[*] https://scan.coverity.com/projects/378
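
For those who haven't seen a derived model: it's plain C (no #includes)
using Coverity's __coverity_*__ modeling primitives, compiled with
cov-make-library and fed to the analysis. Below is a minimal sketch of
the idea, not the model I actually run; the function choice and
comments are illustrative.

    /* Sketch of a derived model, assuming Coverity's standard
     * modeling primitives; not our actual GLib model.
     */
    typedef unsigned long size_t;   /* model files see no headers */

    void *g_malloc(size_t n_bytes)
    {
        void *p;

        __coverity_negative_sink__(n_bytes);    /* negative size is a defect */
        p = __coverity_alloc__(n_bytes);        /* track as heap allocation */
        __coverity_mark_as_uninitialized_buffer__(p);
        return p;   /* g_malloc() aborts on failure, never returns NULL */
    }

    void g_free(void *mem)
    {
        __coverity_free__(mem);                 /* pairs with the allocation */
    }

Teaching the analysis that g_malloc() cannot return NULL kills a class
of false positives and lets it track the buffer's initialization state,
which is where the extra scanning power comes from.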
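
On MISSING_BREAK: Coverity stays quiet when a fall-through comment sits
immediately before the next case label, so annotating the intent fixes
both the report and readability. Schematically (names made up):

    void prepare(void);
    void process(void);

    void dispatch(int event)
    {
        enum { EV_OPEN, EV_READ };

        switch (event) {
        case EV_OPEN:
            prepare();
            /* fall through */   /* annotated: Coverity accepts this */
        case EV_READ:
            process();
            break;               /* omit this and MISSING_BREAK fires */
        default:
            break;
        }
    }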