New submission from Nick Coghlan:

Issue 18952 (fixed in http://hg.python.org/cpython/rev/23770d446c73) was
another case where a test suite change resulted in tests not being executed as
expected. This wasn't noticed at first because the change didn't *fail* the
tests; it just silently skipped them.

We've had similar issues in the past: test name conflicts (so the second
test silently shadowed the first), old regrtest-style test discovery missing a
class name from the test list, and incorrect skip conditions on
platform-specific tests.
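
As a minimal sketch (my own illustration, not code from any of those issues),
the name-conflict case looks like this: the second definition replaces the
first in the class namespace, so unittest never even collects the first test.

import unittest

class ExampleTests(unittest.TestCase):
    def test_feature(self):
        self.assertEqual(1 + 1, 2)   # silently shadowed, never collected or run

    def test_feature(self):          # same name replaces the method above
        self.assertTrue(True)

if __name__ == "__main__":
    # Reports "Ran 1 test" even though two were written.
    unittest.main()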

Converting "unexpected skips" to a failure isn't enough, since these errors 
occur at a narrower scope than entire test modules.
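
For reference, here's roughly what such a conversion could look like (a sketch
of my own, not an existing regrtest mechanism). The limitation is that a test
which is silently dropped never reaches addSkip() at all, so this on its own
can't catch the cases above.

import sys
import unittest

class SkipIsFailureResult(unittest.TextTestResult):
    """Report any skipped test as a failure instead."""
    def addSkip(self, test, reason):
        try:
            raise AssertionError("unexpected skip: %s" % (reason,))
        except AssertionError:
            self.addFailure(test, sys.exc_info())

if __name__ == "__main__":
    runner = unittest.TextTestRunner(resultclass=SkipIsFailureResult)
    unittest.main(testRunner=runner)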

I'm not sure what *would* work, though. Perhaps collecting platform-specific
coverage stats for the test suite itself and looking for regressions?
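
As one rough sketch of that idea (the file name and helpers here are
hypothetical, and counting collected tests is a much cruder proxy than real
coverage data): record how many tests each module actually runs per platform,
and flag later runs that come in lower.

import json
import sys
import unittest

BASELINE_FILE = "test_counts.json"   # hypothetical per-platform baseline

def count_tests(module_name):
    """Count the test cases unittest discovers in a test module."""
    suite = unittest.defaultTestLoader.loadTestsFromName(module_name)
    return suite.countTestCases()

def find_count_regressions(module_names):
    """Compare current per-module test counts against a stored baseline."""
    counts = {name: count_tests(name) for name in module_names}
    try:
        with open(BASELINE_FILE) as f:
            baseline = json.load(f).get(sys.platform, {})
    except FileNotFoundError:
        baseline = {}
    # A module that now collects fewer tests than the baseline is suspicious.
    return {name: (baseline[name], counts[name])
            for name in counts
            if name in baseline and counts[name] < baseline[name]}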

----------
messages: 197218
nosy: ncoghlan
priority: normal
severity: normal
status: open
title: Find a way to detect regressions in test execution
type: enhancement

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue18968>
_______________________________________