On 2019-11-13 15:16:55 +1300, DL Neil via Python-list wrote:
> On 4/11/19 9:44 AM, Peter J. Holzer wrote:
> > TDD does in my opinion encourage EAFP thinking.
> >
> > The TDD cycle is usually:
> >
> > 1 Write a test
> > 2 Write the minimal amount of code that makes the test pass
> > 3 If you think you have covered the whole spec, stop, else repeat
> >   from 1
> >
> > This is often (e.g. in [1]) exaggerated for pedagogic and humoristic
> > reasons. For example, your first test for a sqrt function might be
> >     assert(sqrt(4) == 2)
> > and then of course the minimal implementation is
> >     def sqrt(x):
> >         return 2
>
> I have seen this sort of thing in spreadsheet training - someone
> pulling-out a calculator, summing a column of numbers, and typing 'the
> answer' in the "Total" cell (instead of using the Sigma button or
> @SUM() ).
>
> However, I've never seen anyone attempt to pass-off this sort of code
> outside of desperate (and likely, far too late) floundering during a
> 101 assignment - FAILED!
>
> Who would begin to believe that such code implements sqrt, or that it
> meets with the function's objectives as laid-out in the spec AND the
> docstring? So, anyone can prove anything - if they leave reality/reason
> far-enough behind.
I'm not a TDD expert, but my understanding is that this kind of thing is meant seriously. But of course it is not meant as a finished program. It is meant as a first step. And there is a reason for starting with an obviously incomplete solution: it makes you aware that your test suite is incomplete and your program is incomplete, and that you will have to improve both. If you write this simple test and then write a complete implementation of sqrt, there is a strong temptation to say "the code is complete, it looks correct, I have a test and 100% code coverage; therefore I'm done". But of course you aren't - that one test case is woefully inadequate, as is demonstrated by writing a completely bogus implementation which passes the test.

You say you write all the tests in advance (I read that as "try to write a reasonably complete test suite in advance"). That prevents the pitfall of writing only a few alibi tests. It also has the advantage that you are in a different mindset when writing tests than when writing code (almost as good as having someone else write the code).

However, it means that you consider your tests to be complete when you start to write the code, so there is no feedback. If you forgot to include tests with non-integer results in your test suite (yes, I'm aware you wrote that quickly for a mailing-list posting and probably wouldn't make that mistake if you really wanted to implement sqrt), you probably won't think of it while writing the code, because now you are in the code-writing mindset, not the test-devising mindset.

I think that tight feedback loop between writing a test and writing the *minimal* code which will pass the test has some value: you are constantly trying to outsmart yourself. When you are writing tests you try to cover a few more potential mistakes, and when you are writing code you try to find loopholes in your tests.

> Great joke, but its proponents are delaying a proper consideration of TDD.

I don't know what "proper" TDD is (and even less what a "proper consideration" of TDD would be), but TDD is in my opinion very much rooted in the agile mindset, and that means frequent iteration and improvement. So I think the micro-iteration technique is closer to philosophically pure TDD (if such a thing exists) than your waterfall-style "write complete spec, then write all tests, then write code" technique. (That doesn't mean that your technique is bad - it's just not what I think people are talking about when they say "TDD".)
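To make that micro-iteration loop concrete, here is a rough sketch of how the sqrt example might evolve over a few rounds. The particular test values, the tolerance, and the Newton's-method implementation are just my illustration, nothing canonical:

    import math

    # Sketch only: test values, tolerance and algorithm are illustrative.

    # Round 1: the "joke" test and the minimal code that passes it.
    def sqrt_v1(x):
        return 2

    assert sqrt_v1(4) == 2

    # Round 2: a test with a non-integer result exposes the fraud, so
    # the code has to grow a real algorithm (Newton's method here).
    def sqrt_v2(x):
        guess = x if x > 1 else 1.0
        for _ in range(50):
            guess = (guess + x / guess) / 2
        return guess

    assert sqrt_v2(4) == 2
    assert math.isclose(sqrt_v2(2), 1.4142135623730951, rel_tol=1e-9)

    # Round 3: a test for invalid input forces error handling.
    def sqrt_v3(x):
        if x < 0:
            raise ValueError("math domain error")
        return sqrt_v2(x)

    try:
        sqrt_v3(-1)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for negative input")

Each new test is the smallest thing that exposes a hole in the previous implementation, which is exactly the outsmart-yourself loop I mean.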
hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"