The one thing I don't really like very much about TDD is that in a loosely typed language I suspect it suffers.

Specifically...

- It can test the things you know work.
- It is good at testing the things you know don't work (its strong point).
- It is not good at testing the things you don't know don't work.

Which sounds silly, but isn't. For example...

(oh god I didn't want to do this, I'm not picking on you dude, I'm so sorry I can't think of a better example...)

I look at the two of chromatic's modules that I have had a chance to interact with, and they both suffer from the same problem.

When I write code I tend to keep my code simple, and I've developed other techniques for minimising bugs during the writing process, so I tend to work one class at a time. Write, document, test, repeat.

While the write/document part is increasingly reversed, or the two get mixed together, I still test last (per class), apart from a basic 01_compile.t test script.
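(For what it's worth, that 01_compile.t is nothing fancy; something like the following, with the module names obviously being whatever I'm working on at the time.)

  #!/usr/bin/perl
  # t/01_compile.t - just prove the classes load before anything else runs
  use strict;
  use Test::More tests => 2;

  use_ok( 'My::Module'         );
  use_ok( 'My::Module::Helper' );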

Because I have my code and the spec (POD) I can then write the tests quite quickly for the things I know work. Generally I do a couple of sample black box tests to weed out obvious bugs, then do more thorough tests on specific methods.

But as part of this, whenever I'm testing a specific public method I tend to throw all sorts of junk at it to see how it responds.

Calling ->method( \"" ) or ->method( \undef ) or ->method( sub { die "foo" } ) and a dozen other things like that is all about intentionally provoking the code into blowing up.

In particular, it forces you to examine your param-checking and similar things.
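In test-script form it usually ends up as something like this sketch (class and method names are made up, and it assumes the documented behaviour for bad input is to return false rather than die):

  use strict;
  use Test::More tests => 3;
  use My::Module;    # hypothetical class under test

  my $object = My::Module->new;

  # Wrap each call in eval so an unexpected die doesn't take out the
  # whole test script; the point is that none of these should "work".
  foreach my $evil ( \"", \undef, sub { die "foo" } ) {
      my $rv = eval { $object->method( $evil ) };
      ok( ! $rv, 'Evil param rejected (or at least not accepted)' );
  }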

If your method takes a reference to a string, can it handle a reference to a constant string?

If it takes a CODE ref, can it also handle an object with an overloaded code-context?
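To make those two concrete (the package name here is invented purely for illustration):

  use strict;

  # 1. A reference to a string literal is read-only, so anything that
  #    tries to write through the ref will die at runtime:
  my $const_ref = \"a constant string";
  # $$const_ref = 'oops';   # Modification of a read-only value attempted

  # 2. An object that overloads code dereferencing behaves like a CODE
  #    ref when called, but a naive ref($thing) eq 'CODE' check says no:
  package Pseudo::Code;
  use overload '&{}' => sub { sub { 'called' } };
  sub new { bless {}, shift }

  package main;
  my $thing = Pseudo::Code->new;
  print $thing->(), "\n";      # prints 'called'
  print ref($thing), "\n";     # prints 'Pseudo::Code', not 'CODE'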

I don't know what you'd call this sort of thing, maybe Evil Testing?

It helps you to answer the question "what can possibly go wrong" and weeds out tons and tons of bugs in advance.

In contrast, as I hear chromatic express it, TDD largely involves writing tests in advance, running the tests, then writing the code.

But often it's not until you write the code that you have some idea of what might break it, and so know which sorts of evil you need to concentrate on.

In my use of Test::MockObject and UNIVERSAL::isa/can I found I was initially able to make them fail quite easily with (to me) fairly trivial evil cases that would occur in real life.
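I won't rehash the actual tickets here, but the flavour of "trivially evil" I mean is the sort of thing you can knock up in a few lines (these are generic illustrations of what tends to arrive at that kind of API boundary, not the specific failing cases):

  use strict;

  my @suspects = (
      undef,                                  # not an object at all
      '',                                     # empty class name
      \"just a string",                       # a ref, but not blessed
      bless( {}, 'Class::Nobody::Loaded' ),   # blessed into a class with no code
  );

  foreach my $thing ( @suspects ) {
      # The functional forms of the builtins cope with most of this;
      # anything that wraps or replaces them needs to cope too.
      my $isa = eval { UNIVERSAL::isa( $thing, 'HASH' ) };
      my $can = eval { UNIVERSAL::can( $thing, 'new'  ) };
  }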

Now granted, chromatic has gone beyond the call of duty in making sure all the bugs I've submitted get fixed, and I'm really really not trying to have a go at him. But given that both these modules have to deal with the "reality" of potentially twisted and evil things moving through their innards, they both initially did a fairly bad job (by my evil-aware standards, again) of dealing with edge cases in the params and various other forms of evil input.

This I think (but cannot prove) is a TDD weakness: it can encourage not looking critically at the code after it's written to find obvious places to pound on it. You already wrote the tests and they pass, so it's very tempting to move on, release, wait for reported bugs, then add a test for each reported case, fix it, and release again.

I've submitted repeated reports of bugs that all seem to occur in a similar location near this API boundary, and after the first one I would have thought it was a clue to throw every possible bad type of thing you can think of at those points before releasing again.

But I'd be really interested to hear what it was like on the OTHER side of my bug reports for those two modules, and hear the opposite side of the coin.

And again, sorry to drag you into this, and thanks a ton for Test::MockObject, which has saved my butt twice now in a tight corner (although the first time I had to wait for bugs to be fixed) :)

Adam K



Geoffrey Yong wrote:
hi all :)

for those interested in both php and perl, it seems that php's native .phpt
testing feature will soon produce TAP compliant output - see greg beaver's
comments here

  http://shiflett.org/archive/218#comments

so, TAP is slowly dominating the world... but we all knew that already :)

what actually prompted me to write is a comment embedded there:

"Only the simplest of designs benefits from pre-coded tests, unless you have
unlimited developer time."

needless to say I just don't believe this.  but as I try to broach the
test-driven development topic with folks I hear this lots - not just that
they don't have the time to use tdd, but that it doesn't work anyway for
most "real" applications (where their app is sufficiently "real" or "large"
or "complex" or whatever).

since I'm preaching to the choir here, and I'd rather not get dragged into a
"yes it does, no it doesn't" match, is there literature or something I can
point to that has sufficient basis in "real" applications?  I can't be the
only one dealing with this, so what do you guys do?

--Geoff
