On Jul 26, 2013, at 7:12 AM, exar...@twistedmatrix.com wrote:

> To address this problem, I suggest you get into the habit of watching your 
> unit tests fail in the expected way before you make the necessary 
> implementation changes to make them pass.
> 
> This is only one of an unlimited number of ways your unit tests can be buggy. 
>  It might be tempting to try to fix the test runner to prevent you from ever 
> falling into this trap again - and who knows, it might even be a good idea.
> However, if you run your tests and see them fail in the way you expected them 
> to fail before you write the code that makes them pass, then you will be sure 
> to avoid the many, many, many *other* pitfalls that have nothing to do with 
> accidentally returning the wrong object.
> 
> This is just one of the attractions of test-driven development for me.

On a more serious note than our previous digression, perhaps *this* is the 
thing we should be modifying Trial to support.

The vast majority of Twisted committers do development this way - or at least 
aspire to, most of the time - but to someone new to automated testing, it's not 
entirely clear how you're supposed to use something like Trial, or how 
important it is that you see the tests fail first.
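
For anyone new to the workflow, the "red" step is literally just writing the
test before the code and running trial to watch it fail.  A made-up example -
'frob' doesn't exist anywhere, and the whole point is to run this and see it
fail before you implement it:

from twisted.trial import unittest

# Deliberately written before myproject.frob exists.  Running
# "trial myproject.test_frob" at this point should fail (an import
# error at first, then an assertion failure once frob() is stubbed),
# and that expected failure is the "red" you want to see before
# implementing frob() for real.
from myproject.frob import frob


class FrobTests(unittest.TestCase):
    def test_frob_doubles_its_input(self):
        self.assertEqual(frob(2), 4)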

Perhaps it would be useful if trial kept a bit more memory of what happened 
between test runs.  For example, a mode where you could tell it what you're 
working on, then just re-run the same command, and you'd only get a 'success' 
when you had gone back and forth between red and green.
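
To make that concrete, here's a rough sketch of the bookkeeping such a mode 
might do between runs - the state file and the helper names are invented for 
illustration, nothing like this exists in trial today:

import json
import os

# Invented name: a little file where the tool remembers whether it
# expects the next run to be red or green.
STATE_FILE = ".tribulation"


def load_state():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"expect": "red"}


def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)


def record_run(failures):
    """Given the number of failing tests in a run, decide the verdict."""
    state = load_state()
    if state["expect"] == "red":
        if failures:
            # Saw the expected failure; now go make it pass.
            state["expect"] = "green"
            verdict = "PROCEED"
        else:
            verdict = "AGAIN"  # a test should have failed
    else:
        if failures:
            verdict = "AGAIN"  # you should have made the test pass
        else:
            # Red-then-green cycle complete; write the next failing test.
            state["expect"] = "red"
            verdict = "PROCEED"
    save_state(state)
    return verdict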

Here's a silly little narrative about how one might use such a thing:

$ tribulation begin myproject
Beginning a time of turmoil for Python package 'myproject', in './myproject/'.
myproject.test_1
  Case1
    test_1 ...                                                             [OK]

-------------------------------------------------------------------------------
Ran 1 tests in 0.033s

PROCEED (successes=1) - All tests passing, an auspicious beginning. Now write a 
failing test.
$ tribulation continue
myproject.test_1
  Case1
    test_1 ...                                                             [OK]
myproject.test_2
  Case2
    test_2 ...                                                             [OK]

-------------------------------------------------------------------------------
Ran 2 tests in 0.033s

AGAIN (successes=2) - a test should have failed.
# oops, 'test_2' was just 'pass'... let me fix that
$ tribulation continue
myproject.test_1
  Case1
    test_1 ...                                                             [OK]
myproject.test_2
  Case2
    test_2 ...                                                           [FAIL]

-------------------------------------------------------------------------------
Ran 2 tests in 0.450s

PROCEED (failures=1, successes=1) - we are working on myproject.Case2.test_2 now.
$ tribulation continue
myproject.test_2
  Case2
    test_2 ...                                                           [FAIL]

-------------------------------------------------------------------------------
Ran 1 tests in 0.020s

AGAIN (failures=1) - you should have made the test pass.
$ tribulation continue
myproject.test_2
  Case2
    test_2 ...                                                             [OK]

-------------------------------------------------------------------------------
Ran 1 tests in 0.010s

PROCEED (successes=1) - myproject.Case2.test_2 works now; let's make sure 
nothing else broke.
$ tribulation continue
myproject.test_1
  Case1
    test_1 ...                                                             [OK]
myproject.test_2
  Case2
    test_2 ...                                                             [OK]

-------------------------------------------------------------------------------
Ran 2 tests in 0.033s

PROCEED (successes=2) - no regressions; find the next thing to work on.
$ tribulation conclude
You have received one billion points.  Congratulations, you have defeated 
software.

Does this seem like it might be a useful feature for someone to work on?  Not 
shown here is the part where, when you do introduce a regression, it runs just 
the tests that failed until you fix all of them, then goes back up the suite 
until it reaches the top and you move on to the next thing...
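
In rough pseudo-Python, assuming a run_tests() helper that takes a list of 
test ids and returns the set that failed - again, just a sketch, not anything 
trial actually has:

def drill_down(all_tests, run_tests):
    failing = run_tests(all_tests)
    while failing:
        # Narrow the run to just the failing tests until they all pass...
        while failing:
            failing = run_tests(sorted(failing))
        # ...then go back up the suite: re-run everything, and drill back
        # down if the wider run turns up new failures.
        failing = run_tests(all_tests)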

-glyph
