JDevlieghere added a comment.

In https://reviews.llvm.org/D45215#1056043, @labath wrote:
> I don't think this is going in a good direction TBH.
>
> You are building another layer on top of everything, whereas I think we should be cutting layers out. Besides the issues already pointed out (not being able to differentiate PASS/XFAIL/SKIP, not all .py files being test files), I see one more problem: a single file does not contain a single test -- some of our test files have dozens of tests, and this would bunch them together.

I completely agree, and removing the driver logic from dotest would contribute to that goal, no?

> I think the solution here is to invent a new lit test format, instead of trying to fit our very square tests into the round ShTest boxes. Of the existing test formats, I think that actually the googletest format is closest to what we need here, and that's because it supports the notion of a "test" being different from a "file" -- gtest executables typically contain dozens if not hundreds of tests, and yet the googletest format is able to recognize each one individually. The only difference is that instead of running something like "my_google_test_exec --gtest_list_tests" you would use some python introspection to figure out the list of tests within a file.

Great, I wasn't aware that there was a dedicated googletest format. If it's a better fit then we should definitely consider using something like that.

> Besides this, having our own test format would allow us to resolve the other problems of this approach as well:
>
> - since it's the test format that determines the result of the test, it would be trivial to come up with some sort of a protocol (or reuse an existing one) to notify lit of the full range of test results (pass, fail, xfail, unsupported)
> - the test format could know that a "test file" is everything ending in ".py" **and** starting with Test (which are exactly the rules that we follow now), so no special or new conventions would be needed.
> - it would give us full isolation between individual test methods in a file, while still having the convenience of being able to factor out common code into utility functions

If we come up with our own test format, would we be able to reuse the current dotest.py output?

> I know this is a bit more up-front work, but I think this will result in a much nicer final product, and will allow us to remove a lot more code more easily (maybe even all of unittest2 eventually).

That's totally warranted if it helps in the long term.

> (I apologise for the rashness of my response, I can go into this in more detail tomorrow.)

No worries, I didn't have that impression at all. I appreciate the constructive feedback!

https://reviews.llvm.org/D45215
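For concreteness, a rough sketch of what such a custom format might look like is below. It is only an illustration, not a worked-out design: the class name `LLDBPythonTest`, the `-p`/`-f` dotest.py invocation, and the bare pass/fail mapping are assumptions standing in for whatever we would actually build, while the lit pieces it uses (`TestFormat`, `lit.Test.Test`, `lit.Test.Result`, `lit.util.executeCommand`) are the same ones the googletest format is built on.

```python
import ast
import os

import lit.Test
import lit.util
from lit.formats.base import TestFormat


class LLDBPythonTest(TestFormat):
    """One lit test per test method in each Test*.py file (sketch only)."""

    def __init__(self, dotest_cmd):
        # e.g. ['python', '/path/to/dotest.py', ...] -- placeholder command.
        self.dotest_cmd = dotest_cmd

    def _enumerate_methods(self, source_path):
        # "Python introspection" done statically with the ast module: yield
        # (class name, method name) for every method starting with 'test'.
        with open(source_path) as f:
            tree = ast.parse(f.read(), source_path)
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                for item in node.body:
                    if isinstance(item, ast.FunctionDef) and \
                            item.name.startswith('test'):
                        yield node.name, item.name

    def getTestsInDirectory(self, testSuite, path_in_suite, litConfig,
                            localConfig):
        source_dir = testSuite.getSourcePath(path_in_suite)
        for filename in sorted(os.listdir(source_dir)):
            # Keep the existing convention: a test file ends in '.py' and
            # starts with 'Test'.
            if not (filename.startswith('Test') and filename.endswith('.py')):
                continue
            file_path = os.path.join(source_dir, filename)
            for class_name, method_name in self._enumerate_methods(file_path):
                # Two extra "virtual" path components identify the file and
                # the individual method, much like the googletest format.
                yield lit.Test.Test(
                    testSuite,
                    path_in_suite + (filename, class_name + '.' + method_name),
                    localConfig)

    def execute(self, test, litConfig):
        filterspec = test.path_in_suite[-1]   # 'TestClass.test_method'
        filename = test.path_in_suite[-2]     # 'TestFoo.py'
        # getSourcePath() includes the two virtual components; strip them to
        # recover the directory that really contains the file.
        test_dir = os.path.dirname(os.path.dirname(test.getSourcePath()))

        # Assumed driver flags: '-p' narrows the file pattern and '-f' the
        # class.method filter; whatever dotest actually accepts goes here.
        cmd = self.dotest_cmd + ['-p', '^' + filename + '$',
                                 '-f', filterspec, test_dir]
        out, err, exitCode = lit.util.executeCommand(
            cmd, env=test.config.environment)

        if exitCode == 0:
            return lit.Test.Result(lit.Test.PASS, '')
        # This is where a richer result protocol (parsing dotest's summary or
        # dedicated exit codes) would report XFAIL/UNSUPPORTED instead of
        # collapsing everything that isn't a pass into FAIL.
        return lit.Test.Result(lit.Test.FAIL, out + err)
```

In the suite's lit.cfg this would be wired up with something like `config.test_format = LLDBPythonTest([python_executable, dotest_path])`, where both arguments are placeholders for whatever the configuration already knows about the interpreter and the driver.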