Scott David Daniels wrote:
> There has been a bit of discussion about a way of providing test cases
> in a test suite that should work but don't. One of the rules has been
> that the test suite should be runnable and silent at every checkin.
> Recently there was a checkin of a test that should work but doesn't.
> The discussion got around to means of indicating such tests (because
> the effort of creating a test should be captured) without disturbing
> the development flow.
>
> The following code demonstrates a decorator that might be used to
> aid this process. Any comments, additions, deletions?
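[The decorator itself isn't reproduced in this quote. As a rough sketch of the shape such a "known failure" decorator might take — the name `broken` and its exact behavior here are my own guesses, not Scott's actual code — a failure is silently tolerated while an unexpected pass is flagged so the marker gets removed. (Modern Python ships this idea as `unittest.expectedFailure`.)]

```python
import functools
import unittest

def broken(reason):
    """Mark a test as known-broken: a failure is reported as expected,
    while an unexpected pass is flagged so the marker can be removed."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            try:
                func(self, *args, **kwargs)
            except AssertionError:
                # Known failure: stay silent so the suite remains green.
                return
            # The test passed unexpectedly -- surface that loudly.
            self.fail("test marked broken (%s) unexpectedly passed" % reason)
        return wrapper
    return decorator

class Demo(unittest.TestCase):
    @broken("feature not yet implemented")
    def test_known_bug(self):
        self.assertEqual(1 + 1, 3)  # fails today; silently tolerated
```

Running `Demo` through a normal runner reports success, so the suite stays silent at checkin while the test's effort is captured.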
Marking a unittest as "should fail" in the test suite seems just wrong
to me, whatever the implementation details may be. If at all, I would
apply an "I know these tests fail; don't bother me with the messages
for now" filter further down the chain, in the TestRunner maybe.
Perhaps the code for platform-specific failures could be generalized?

Peter
--
http://mail.python.org/mailman/listinfo/python-list