Folks, Klement suggested bringing this up on the list to discuss what the best option is as seen by the community.
It concerns the independence of tests within a single test/test_foo.py file. From the behavior of the test system, the tests are executed in lexicographic order of the function name, so naming them test_0000_blah, test_0010_bar, etc., and calling the smaller helper functions from them, gives nice control over the gradual testing process and allows one test to build on the result of the preceding test(s). For more complicated scenarios this reduces the time spent retesting the same functionality (while being mindful that it increases the time "make test" takes overall). It also keeps log.txt less verbose and thus easier to navigate when troubleshooting why a given test fails, since the same sequence is not repeated multiple times. With 20 or so testcases, that starts to be significant.

On the other hand, this approach does not allow executing an individual testcase from the test_foo file. In my experience, I would always rerun "make test" for a given file until the first failing test was fixed before analyzing the next one, so this was never a problem. It was also an incentive to make all the tests as fast as possible, so that running all of them at once had the lowest overhead.

To make the individual testcases fully orthogonal to each other, one either has to create fewer but larger testcases (which makes things harder to debug and the failure "fingerprints" less obvious), or repeat many of the same activities as in the simpler testcases in order to prepare for the more complicated ones. Neither is helpful from a debugging standpoint. So on one hand there is an "impure" but quite convenient approach for me as a developer, and on the other hand a stricter one, but with less visible (again, to me) value. One could argue it may be useful to reorder the tests between executions, but I don't see any practical value in that beyond contributing to a combinatorial explosion.
There is potential practical value in being able to execute multiple individual testcases in parallel, or multiple instances of the same testcase to test scalability to some extent, assuming we ever have that option. (And arguably the scalability aspect is already covered by the CSIT tests, so it seems like an overlap.)

Currently there is no specific policy on whether the tests within a single file must be orthogonal, so it would be useful to hear opinions based on others' use of the testcases.

NB: this concerns only the tests within a single Python test file. The different files are naturally independent (which was one more reason, for me, why it should be possible to make the tests within a single file ordered and dependent).

Please let me know what you think!

Thanks a lot!

--a
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev