Apologies for lateness - stuff happened...
On 4/11/19 9:44 AM, Peter J. Holzer wrote:
On 2019-11-04 07:41:32 +1300, DL Neil via Python-list wrote:
On 3/11/19 6:30 AM, Bev In TX wrote:
On Nov 1, 2019, at 12:40 AM, DL Neil via Python-list
<python-list@python.org> wrote:
Is the practice of TDD fundamentally, if not philosophically,
somewhat contrary to Python's EAFP approach?
Agreed: (in theory) TDD is independent of language or style. However, I'm
wondering if (in practice) it creates a mode of thinking that pushes one
into an EAFP way of thinking?
This is exactly the opposite of what you proposed in your first mail,
and I think it is closer to the truth:
It is a while ago, and I cannot honestly remember if I was attempting to
provoke discussion/debate/illustration by switching things around, or if
I simply confused the abbreviations.
TDD does in my opinion encourage EAFP thinking.
The TDD cycle is usually:
1 Write a test
2 Write the minimal amount of code that makes the test pass
3 If you think you have covered the whole spec, stop, else repeat
from 1
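To make that loop concrete, here is one honest turn of it, pytest-style
(the is_leap_year() function and its 'spec' are invented purely for
illustration):

# Step 1: write a (failing) test before any application code exists.
def test_leap_year_divisible_by_four():
    assert is_leap_year(2024) is True

# Step 2: write the minimal code that makes this test pass.
def is_leap_year(year):
    return year % 4 == 0

# Step 3: the imagined spec (century years, etc.) is not yet covered,
# so repeat from step 1, e.g. with: assert is_leap_year(1900) is False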
That cycle is often (e.g. in [1]) exaggerated for pedagogic and humorous
reasons. For example, your first test for a sqrt function might be
assert sqrt(4) == 2
and then of course the minimal implementation is
def sqrt(x):
    return 2
I have seen this sort of thing in spreadsheet training - someone
pulling-out a calculator, summing a column of numbers, and typing 'the
answer' in the "Total" cell (instead of using the Sigma button or @SUM() ).
However, I've never seen anyone attempt to pass-off this sort of code
outside of desperate (and likely, far too late) floundering during a 101
assignment - FAILED!
Who would begin to believe that such code implements sqrt, or that it
meets with the function's objectives as laid-out in the spec AND the
docstring? So, anyone can prove anything - if they leave reality/reason
far-enough behind.
Unfortunately the phrase "test pass" (as above) can be misused/abused in
this way; but nowhere in your or my descriptions of TDD did *we* feel it
necessary to point-out that the/any/every test should be true to the
spec. It's unnecessary.
Great joke, but its proponents are delaying a proper consideration of
TDD. Perhaps they are hiding behind something?
Which just means that we don't have enough test cases yet. But the point
is that a test suite can only check a finite (and usually rather small)
number of cases, while most interesting programs accept a very large (if
not really infinite) number of inputs, so the test suite will always be
incomplete. At some point you will have to decide that the test suite is
good enough and ship the code - and hope that the customer will forgive
you if you have (inevitably) forgotten an important case.
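Sticking with the sqrt joke for a moment: the instant a second case is
added, the 'return 2' stub fails and something real has to replace it -
here a rough Newton's-method sketch (illustrative only, not a production
sqrt):

def sqrt(x):
    # Newton's method: repeatedly average the guess with x/guess.
    guess = x / 2 or 1.0
    for _ in range(50):
        guess = (guess + x / guess) / 2
    return guess

assert sqrt(4) == 2               # the original test still passes
assert abs(sqrt(9) - 3) < 1e-9    # the new case is what killed 'return 2'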
Which highlights a difference between the description (above) and the
approach I described: I tend to write 'all' the tests first - which, now
that I type it, really does sound LBYL!
Why? Because testing is what we might call a diagnostic mode of thinking
- what is happening here and why? Well, nothing is actually happening in
terms of the application's code; but I'm trying to think-ahead and
imagine the cases where things might not go according to plan.
Later, I shift to (what?) code-authoring/creative mode (yes, someone's
going to point-out the apparent dichotomy in these words - please
go-ahead) and 'bend the computer to my will'. It is the tests which
(hopefully) prove the success or otherwise of my output. I'm not coding
to the test(s) - sometimes I deliberately take a meal-break or
overnight-break between writing tests and writing code.
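For what it is worth, the 'all the tests first' habit tends to come out
looking something like this - pytest's parametrize, with a hypothetical
normalise_name() and invented cases:

import pytest

@pytest.mark.parametrize("raw, expected", [
    ("alice", "Alice"),          # plain lower-case
    ("  BOB  ", "Bob"),          # stray whitespace and shouting
    ("", ""),                    # empty input passes straight through
    ("mary-jane", "Mary-Jane"),  # hyphenated names keep both capitals
])
def test_normalise_name(raw, expected):
    assert normalise_name(raw) == expected

# Written later, after the meal-break, to satisfy the suite above:
def normalise_name(raw):
    return "-".join(part.capitalize() for part in raw.strip().split("-"))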
Why not follow 'the letter of the law'? Because it seems to me that they
require different ways of looking (at the same problem). My fear of
doing them one-at-a-time is that I'll conclude (too early) exactly as
you say - that's enough testing, 'it works'!
Returning to my possible re-interpretation/misuse of the TDD paradigm:
I'm only officially 'testing' one test at a time, but if 'this iteration
of the code' passes two or more of the next tests in the series, well...
("oh what a good boy am I"!) In practice, I'll review this iteration's
test and its motivation(s) to ensure that the code hasn't passed 'by
accident' (or should that be, as a "by-product").
Now, reviewing the: write one (useful/meaningful) test, then write the
code until it passes (this and all previous tests), then re-factor
(tests and application code), rinse-and-repeat. The question is: might
the 'diagnostic' mode of thought 'infect' the 'creative' and thereby
encourage writing more "defensive code" than is EAFP/pythonic?
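That worry, in code: the same lookup written defensively (LBYL) and then
in the EAFP style this list usually prefers - the config dict and the
fallback value are invented for the example:

config = {"timeout": "30"}

# Defensive / LBYL - check everything before acting:
if "timeout" in config and str(config["timeout"]).isdigit():
    timeout = int(config["timeout"])
else:
    timeout = 10

# EAFP - just act, and handle the exceptional cases:
try:
    timeout = int(config["timeout"])
except (KeyError, TypeError, ValueError):
    timeout = 10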
There is very little emphasis in TDD on verifying that the code is
correct - only that it passes the tests.
Relationship of both to spec, discussed above.
Agreed, that the spec must be held in the mind(s) of the test and code
writer(s)! Also agreed that code has an asymptotic relationship with
"100% tested".
I don't think any (serious) testing paradigm claims omniscience. The
question will always be 'what is enough?'. On the other hand, the number
of tests written/tested is no guide or guarantee, either. Some parts of
programming are "science" but some really are "art"!
hp
Hah, the same initials as Harry:-)
[1] Harry J.W. Percival, Test-Driven Development with Python, O'Reilly,
2017
I've lost track of our friendly 'testing goat' since we both 'moved-on'.
You are right, I should take another look at the book (in fact, IIRC
there's been a 'new' edition). Thanks!
--
Regards =dn