On Mon, Jul 5, 2010 at 8:31 PM, David Holland
<dholland-sourcechan...@netbsd.org> wrote:
>
> The way some of the tests are organized suggests that the intended
> model is one test program per test victim (or per test victim and
> substantially different testing harness) and then one test case per
> bug affecting that victim.
>
> Is that right?
Depends on what you understand by "victim" :-P

I've seen this kind of separation in many testing frameworks, and it
usually comes in handy.

For unit tests: you have a test program for every source module. Such
a test program contains one unit test per function, or one unit test
per function subcase (e.g. error paths, an input array with elements,
an input array without elements, etc.).

For integration/system/whatever tests: you have a test program per
"semantic" unit (I guess that's what you call victims ;-) and then
test cases for every particular scenario you want to represent. Say,
for a command-line tool, you just have a single test program that
feeds different options and argument values to it and checks the
results; every combination goes into a test case. For a web server,
you'd have a test program for simple requests (and then provide all
possible request values as test cases), another test program for SSL
(with test cases for valid/invalid certificates), etc.

For the VFS layer, the granularity of a test program per public
function seems reasonable to me, with test cases for the different
input/output value combinations.

Whatever is most suitable for a particular application; it's
certainly up to the developer.

*However*, the current practice of writing test programs with just
one test case in them seems wrong to me -- but only if the test
program carries a name that does not allow for further growth. Such
an approach provides no cohesion among test cases, and it makes
things harder to maintain (for all the reasons pooka mentioned
regarding the addition of new test programs to the tree). See the
sketch below for the shape I have in mind.
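To make the grouping concrete, here is a minimal sketch of a single
test program that holds several test cases for one module, assuming
the atf-c(3) API; the case names (empty_input, error_path) and the
checks inside them are made up for illustration:

    #include <atf-c.h>

    /* One test case per interesting subcase of the module under test. */
    ATF_TC(empty_input);
    ATF_TC_HEAD(empty_input, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Checks behavior on an empty "
            "input array");
    }
    ATF_TC_BODY(empty_input, tc)
    {
        ATF_REQUIRE_EQ(0, 0);  /* Placeholder for a real check. */
    }

    ATF_TC(error_path);
    ATF_TC_HEAD(error_path, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Checks the error path");
    }
    ATF_TC_BODY(error_path, tc)
    {
        ATF_REQUIRE(1 == 1);  /* Placeholder for a real check. */
    }

    /* New subcases for the same module get registered here, instead
     * of becoming new programs in the tree. */
    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, empty_input);
        ATF_TP_ADD_TC(tp, error_path);
        return atf_no_error();
    }

The runner still reports every case individually, and adding a new
subcase is a few lines in an existing file rather than a new program.

--
Julio Merino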