Nick Coghlan added the comment:

My day job these days is to work on the Beaker test system 
(http://beaker-project.org).

I realised today that it actually includes a direct parallel to Antoine's 
proposed subtest concept: whereas in unittest, each test currently has exactly 
one result, in Beaker a given task may have *multiple* results. The overall 
result of the task is then derived from the individual results (so if any result is 
a failure, the overall task is a failure).

"tasks" are the smallest addressable unit in deciding what to run, but each 
task may report multiple results, allowing fine-grained reporting of what 
succeeded and what failed.

That means there's one part of Antoine's patch I disagree with: the change that 
eliminates the derived "overall" result attached to the aggregate test. I think 
Beaker's model is the better one: there's a single result for the overall task 
(so you can ignore the individual results if you don't care), and the individual 
results are reported separately (if you do care). That will make it easier to 
provide something that integrates cleanly with existing test runners.
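To make the point concrete, here is a minimal sketch of what keeping the derived overall result might look like. The class and method names (`AggregateResult`, `add_subtest_failure`, `was_successful`) are assumptions for illustration, not the API in Antoine's patch:

```python
# Illustrative sketch (not the patch's actual API): an aggregate test
# result keeps one derived overall outcome, so a runner that only
# inspects the overall status keeps working, while the individual
# subtest outcomes remain available for finer-grained reporting.
class AggregateResult:
    def __init__(self, test_name):
        self.test_name = test_name
        self.subtest_failures = []

    def add_subtest_failure(self, label, message):
        self.subtest_failures.append((label, message))

    def was_successful(self):
        # Single overall result, derived from the individual ones.
        return not self.subtest_failures

result = AggregateResult("test_widget")
result.add_subtest_failure("i=3", "expected 9, got 8")

# An existing runner need only check the derived overall outcome...
status = "PASS" if result.was_successful() else "FAIL"
# ...while a richer reporter can still drill into the individual
# results via result.subtest_failures.
```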

The complexity involved in attempting to get expectedFailure() to behave as 
expected is also a strong indication that there are still problems with the way 
these results are being aggregated.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue16997>
_______________________________________