aaron.ballman added a comment.

In D65912#1623293 <https://reviews.llvm.org/D65912#1623293>, @ZaMaZaN4iK wrote:
> > My point regarding statistics is that the check needs to pull its own weight -- if it doesn't find many true positives, it's not of much value to a broad community, or if it has a lot of false positives, we may need to tweak the check before releasing it to the public, etc. So definitely do the implementation work, but part of that work should be testing it over large code bases and reporting back the results.
>
> Okay. Do we have any infrastructure for doing such testing? Or should I do it manually: prepare some large codebases, run clang-tidy with the check over them, and parse the results?

We don't really have an automated way to do this (that I am aware of, anyway). I typically find large code bases that either use CMake directly or can use `bear`, so that I can generate a compile_commands.json file; then I have a script that runs clang-tidy over all the compilation units in the compilation database (a rough sketch of that kind of script is appended at the end of this message).

Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D65912/new/

https://reviews.llvm.org/D65912
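Appended sketch: a minimal illustration of the driver script described above, not the actual script aaron.ballman uses. It assumes a compile_commands.json in a build directory (produced by CMake with `-DCMAKE_EXPORT_COMPILE_COMMANDS=ON`, or by `bear`), and the `CHECK_FILTER` value is a placeholder to be replaced with the check being evaluated.

```
#!/usr/bin/env python3
# Sketch only: run clang-tidy over every entry in a compile_commands.json
# and count how many translation units produce diagnostics.
import json
import subprocess
import sys
from pathlib import Path

# Placeholder; substitute the name of the check under evaluation.
CHECK_FILTER = "-*,bugprone-*"

def main() -> int:
    # Directory containing compile_commands.json.
    build_dir = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    entries = json.loads((build_dir / "compile_commands.json").read_text())

    flagged = 0
    for entry in entries:
        # The source path may be recorded relative to the entry's directory;
        # joining handles both relative and absolute paths.
        source = Path(entry["directory"]) / entry["file"]
        result = subprocess.run(
            ["clang-tidy", "-p", str(build_dir),
             f"-checks={CHECK_FILTER}", str(source)],
            capture_output=True, text=True)
        if result.stdout.strip():
            flagged += 1
            print(result.stdout)

    print(f"{flagged} of {len(entries)} translation units produced diagnostics",
          file=sys.stderr)
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Invoked as `python3 run_tidy.py path/to/build`, the per-file output (to skim for false positives) plus the summary count are roughly the kind of numbers worth reporting back when evaluating a new check over a large code base.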