Piers Cawley said:

> Richard Nuttall <[EMAIL PROTECTED]> writes:
>
>> In a previous life, I worked as part of a team (implementing Expert
>> Systems in VAX Pascal, actually), and we had one person whose sole aim
>> in life was to design and build test cases. In many cases his complete
>> lack of knowledge of implementation detail was good, because he thought
>> up all sorts of tests that were useful precisely because he wasn't as
>> close to the trees as we were. In other cases some of the tests didn't
>> exercise any new functionality because, in the internals, seemingly
>> different cases were implemented using the same functionality.
>
> At the time. Those tests would still have value from the point of view
> of ensuring that if the internals changed in such a way that the
> 'different' cases really were different, then the tests would ensure
> that they still had correct behaviour. And anyway, who *cares* if you
> have more than 100% test coverage? The goal is at least 100%, not
> exactly 100%.
That depends to a certain extent on how many spare cycles you have to
burn. With just the right number of tests we could fit in more smoke
test runs. At the moment it takes most of the day to compile and test
bleadperl on my Zaurus. I'd be very pleased if it didn't in the future.
And where is Nick Clark anyway? :-)

But a decent code coverage tool should be able to tell you which tests
are superfluous (from a coverage perspective), and the optimal order in
which to run the tests, such that the tests which increase the code
coverage by the greatest amount run first (see the sketch below).

As an aside, I suspect that we'll still want to order the initial tests
so that basic functionality is checked before it is used in more
complicated tests.
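Here's a minimal sketch of the sort of greedy ordering I mean. The
coverage data and test names are made up for illustration; in practice
the data would come from a real coverage tool such as Devel::Cover.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical coverage data: test script => lines it exercises.
    my %covers = (
        't/base.t'    => [ 1 .. 10 ],
        't/strings.t' => [ 5 .. 20 ],
        't/regex.t'   => [ 15 .. 40 ],
        't/dup.t'     => [ 2 .. 8 ],   # covers nothing base.t doesn't
    );

    my %covered;   # lines exercised by the tests chosen so far
    my @order;     # tests, in order of decreasing marginal coverage

    while (%covers) {
        my ($best, $gain) = (undef, 0);
        for my $test (sort keys %covers) {
            # Count only the lines this test would newly cover.
            my $new = grep { !$covered{$_} } @{ $covers{$test} };
            ($best, $gain) = ($test, $new) if $new > $gain;
        }
        last unless defined $best;   # the rest add no new coverage
        $covered{$_} = 1 for @{ $covers{$best} };
        push @order, $best;
        delete $covers{$best};
    }

    print "Run order: @order\n";
    print "Superfluous (coverage-wise): @{[ sort keys %covers ]}\n";

Greedy ordering like this isn't guaranteed to find the smallest
possible set of tests (that's the NP-hard set cover problem), but it is
a decent approximation, and it gives exactly the "biggest coverage
gains first" order.

--
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net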