Some general thoughts on core development and testing. For
bugs.python.org issues, the first Stage choice is 'test needed'. All
code patches *should* include new tests. This was not always so, and we
are still paying off technical debt. One problem I have encountered
with idlelib is that some refactoring is needed to make tests easier,
or even possible, to write, while refactoring should itself be preceded
by good tests. The *should* is not always enforced, but skipping new
tests without doing adequate manual tests can lead to new bugs, and
adequate manual tests are tedious and too easy to forget. I have
learned this the hard way.
On 11/3/2019 3:44 PM, Peter J. Holzer wrote:
> On 2019-11-04 07:41:32 +1300, DL Neil via Python-list wrote:
>> Agreed: (in theory) TDD is independent of language or style.
>> However, I'm wondering if (in practice) it creates a mode of thinking
>> that pushes one into an EAFP way of thinking?
> This is exactly the opposite of what you proposed in your first mail,
> and I think it is closer to the truth:
>
> TDD does in my opinion encourage EAFP thinking.
As in "use the code and if it fails, add a test and fix it" versus "if
the code can be proven correct, use it".
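
To make the contrast concrete, here is the same lookup in both styles
(the dict and names are made up for illustration):

    settings = {'color': 'red'}
    key, default = 'size', 'medium'

    # LBYL: look before you leap; test the precondition first.
    if key in settings:
        value = settings[key]
    else:
        value = default

    # EAFP: easier to ask forgiveness than permission; just try the
    # access and handle the exception if the key is missing.
    try:
        value = settings[key]
    except KeyError:
        value = default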
> The TDD cycle is usually:
>
> 1 Write a test
> 2 Write the minimal amount of code that makes the test pass
> 3 If you think you have covered the whole spec, stop, else repeat
>   from 1
> This is often (e.g. in [1]) exaggerated for pedagogic and humorous
> reasons. For example, your first test for a sqrt function might be
>
>     assert(sqrt(4) == 2)
>
> and then of course the minimal implementation is
>
>     def sqrt(x):
>         return 2
This *is* exaggerated. For math functions, I usually start with a few
cases, not just 1, to require something more of an implementation. See
below.
> Which just means that we don't have enough test cases yet. But the
> point is that a test suite can only check a finite (and usually rather
> small) number of cases, while most interesting programs accept a very
> large (if not really infinite) number of inputs, so the test suite
> will always be incomplete.
I usually try to also test with larger 'normal' values. When possible,
we could, and I think should, make more use of randomized testing. I
got this idea from reading about the Hypothesis module. See below for
one example. A similar example for multiplication might test

    assertEqual(a*b + b, (a+1)*b)

where a and b are random ints.
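
As a sketch of how that property might look with the third-party
Hypothesis package (assuming Hypothesis is installed; the test name is
made up):

    from hypothesis import given, strategies as st

    @given(st.integers(), st.integers())
    def test_mul_distributes(a, b):
        # Property under test: a*b + b == (a+1)*b for all ints.
        # Hypothesis generates the cases and shrinks any failure
        # to a minimal counterexample.
        assert a*b + b == (a+1)*b

Calling test_mul_distributes() directly, or running it under pytest,
checks the property against a batch of generated pairs.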
> At some point you will have to decide that the test suite is good
> enough and ship the code - and hope that the customer will forgive
> you if you have (inevitably) forgotten an important case.
A possible test for math.sqrt:

from math import sqrt
from random import randint
import unittest

class SqrtTest(unittest.TestCase):

    def test_small_counts(self):
        # Fixed cases: sqrt(0), sqrt(1), sqrt(4).
        for i in range(3):
            with self.subTest(i=i):
                self.assertEqual(sqrt(i*i), i)

    def test_random_counts(self):
        for i in range(100):  # Number of subtests.
            n = randint(0, 9999999)
            with self.subTest(n=n):
                self.assertEqual(sqrt(n*n), float(n))

    def test_negative_int(self):
        self.assertRaises(ValueError, sqrt, -1)

if __name__ == '__main__':
    unittest.main()
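
Run as a file, unittest.main() collects and runs the three test
methods. Passing n to subTest means that any failing random case
reports which n triggered it, which matters when the inputs are not
reproducible.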
> There is very little emphasis in TDD on verifying that the code is
> correct - only that it passes the tests.
For the kind of business (non-math) code that seems to be the stimulus
for TDD ideas, there often is no global definition of 'correct'.
--
Terry Jan Reedy