> On Jan 20, 2016, at 11:25 AM, Zachary Turner <ztur...@google.com> wrote:
>
> zturner added a comment.
>
> For example, what if someone adds a test that uses a very small amount of
> functionality but tests the thing it's supposed to test. Do we block the
> change and say "please make this test more complicated, we want to test as
> much stuff as possible"? Of course not. It's ridiculous right?
>
Indeed, that is silly. But not much sillier than "don't add a step to that test because you aren't covering that step explicitly in this test..."

> It sounds like at least 2 people on this thread know of specific areas where
> we're lacking test coverage. Why not spend a few days doing nothing but
> improving test coverage?

I don't think I was claiming to KNOW where we are missing test coverage. I'm simply saying that, in my past experience with both the lldb and gdb test suites, we often catch bugs because of failures in a part of a test that was not essential to what that test was actually checking. That's because the debugger is not like a compiler, where you feed in some input and it proceeds in a regular way through a machine to spit out some output at the end. The debugger keeps a lot more state across multiple operations, so you can get bugs that arise because you did A then B rather than B then A, or A after B, etc. That's just the nature of the thing. So some amount of fuzziness in what a test covers is really helpful for catching these sorts of interactions.

Jim

> http://reviews.llvm.org/D16334
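
To make the kind of order-dependent interaction described above concrete, here is a minimal sketch using lldb's Python SB API. It is only an illustration, not anything from the review under discussion: the binary name ("a.out"), the symbol names ("main", "helper"), and the variable name ("counter") are hypothetical placeholders.

import lldb

# Each ordering below exercises a different code path in the debugger,
# which is why incidental steps in a test can surface real bugs.
debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)   # make launch/step calls block until the process stops

target = debugger.CreateTarget("a.out")             # hypothetical binary

# Breakpoint set *before* launch: it has to be re-resolved once the
# process's modules are actually loaded.
bp_main = target.BreakpointCreateByName("main")
process = target.LaunchSimple(None, None, ".")

# Breakpoint set *after* launch: inserted into an already-stopped process,
# a different path through the same feature.
bp_helper = target.BreakpointCreateByName("helper")  # hypothetical symbol

# Step, then read a value: the step invalidates the previous frame, so the
# value has to come from a freshly fetched frame. Doing the same two
# operations in the other order exercises different cached state.
thread = process.GetSelectedThread()
thread.StepOver()
frame = thread.GetSelectedFrame()
value = frame.EvaluateExpression("counter")          # hypothetical variable
print(value.GetValue())

process.Kill()
lldb.SBDebugger.Destroy(debugger)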