> The other concern I've had with our style of xUnit testing is that
> we're testing behavior, but not the actual data. With Test::More, we
> tested against a copy of the live database (when possible -- but this
> definitely caused some issues) and we sometimes caught data problems
> that xUnit style testing seems less likely to catch. The reason for
> this is quite simple: when you set up the data, you're setting up your
> *idea* of what the data should be. This might not match what your data
> actually is.
I take the approach that these are two fundamentally different things. First, as a developer you need to code against your idea of what the data is, taking the "known data gives expected results" approach to your tests. A good example is a subroutine that uses a regex to parse the data: the best you can do while developing the routine is to make sure the regex handles the conditions in some sample data (which you may in fact be making up in your head). Once that is done, you can bang against the routine with real data and see how it holds up. If you hit a condition you didn't think about before, you now have two tests you can use: the real data that caused the error, and some minimal data extracted from it that isolates the problem, which can be added to your developer tests. This is what I have been doing lately.

Call them whatever you like -- I'm sure the XP people have some fancy nomenclature for it -- but the idea is to separate developer-level tests (used while coding) from API-level tests (real-world API usage) and use both in your testing process. The former is what I use for coverage purposes, tracing through the logical branches in as isolated a context as possible. With the latter I try to tie in live (test) databases, servers, and so on, relying on them to fill in the gaps left by an isolated test environment.

HTH
--Geoff
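P.S. To make that concrete, here's a rough sketch in Test::More terms. parse_record() and the "name: value" record format are invented for illustration; the point is the shape of the test file, not the routine itself. The first two tests are the developer-level "known data" kind, and the last one is the minimal case you'd extract after real data tripped up an earlier version of the regex.

  use strict;
  use warnings;
  use Test::More tests => 3;

  # Hypothetical routine under test: parses "name: value" record lines.
  sub parse_record {
      my ($line) = @_;
      return undef unless $line =~ /^\s*(\w+):\s*(.*)$/;
      return { name => $1, value => $2 };
  }

  # Developer-level tests: made-up sample data, "known data gives
  # expected results".
  is_deeply( parse_record('color: blue'),
             { name => 'color', value => 'blue' },
             'parses a simple record' );
  is( parse_record('not a record'), undef, 'rejects malformed input' );

  # Minimal case extracted from real data: suppose an earlier version
  # of the regex lacked the leading \s* and choked on indented records
  # from the live feed. This test keeps that from regressing.
  is_deeply( parse_record('  color: blue'),
             { name => 'color', value => 'blue' },
             'handles leading whitespace seen in live data' );

The third test is the payoff of the round trip: real data exposes the condition, you boil it down to the smallest input that reproduces it, and it lives in the developer tests from then on.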