Here's my test-first TODO test management paradox:

If I write a failing test and share it through the central repo,
the smoke bot fails and keeps sending us e-mail until it is fixed,
which can be annoying when these are unimplemented features rather than
bugs. The effect can be that people quit paying attention to the smoke bot.

If I mark the test TODO, the smoke bot succeeds and the test disappears
from the radar of tests that should be fixed soon.
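(For what it's worth, the same TODO idea exists outside TAP. A minimal sketch in Python, using the stdlib's unittest.expectedFailure as an analogue -- not whatever framework the smoke bot actually runs -- shows the "green while unimplemented" behavior:)

```python
import io
import unittest

class TestPlanned(unittest.TestCase):
    # Analogue of a TAP "TODO" test: the test is expected to fail while
    # the feature is unimplemented, so the suite stays green, and it
    # flips to "unexpected success" once the feature starts working.
    @unittest.expectedFailure
    def test_unimplemented_feature(self):
        self.fail("feature not implemented yet")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPlanned)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
# The expected failure does not count against the run:
print(result.wasSuccessful(), len(result.expectedFailures))
```

(Which is exactly the problem: the run reports success, and the TODO test quietly falls off the radar.)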

What's a good way to manage TODO tests so that they continue to be
noticed and worked on, but without being annoying? 

Partly I wish that the reporting tools provided more detail about TODO
tests. Rather than just telling me that X TODO tests passed, I'd like
to know exactly what they were and where they were located so I can go
work on them.
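(The reporting I'm wishing for wouldn't be hard to bolt on. A sketch, again using unittest's expectedFailures list as a stand-in for TAP TODO bookkeeping -- report_todo_tests is a hypothetical helper, not an existing API:)

```python
import io
import unittest

class TestSuitePlanned(unittest.TestCase):
    # One TODO test and one real test, for demonstration.
    @unittest.expectedFailure
    def test_feature_a(self):
        self.fail("not implemented")

    def test_existing(self):
        self.assertTrue(True)

def report_todo_tests(result):
    # One line per TODO test that is still failing, naming it exactly,
    # so it stays visible instead of vanishing into "X TODO tests passed".
    return [f"TODO still open: {test.id()}"
            for test, _err in result.expectedFailures]

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSuitePlanned)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
for line in report_todo_tests(result):
    print(line)
```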

I also realize I have another class of TODO tests. It's the:

I'll-get-to-it-eventually,-maybe-next-year class of TODO tests.

These are things I've noted I'd like to have an automated test for,
but the tests are long term because they are expensive, difficult to
set up, or, well, I'm imperfect.

Maybe being able to add a "due date" to tests would help. :) 

The TODO tests would pass before the due date, but if they aren't
addressed in the flow of work, they start failing to bring attention to
themselves.

And then there could be a "snooze" button too...
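(A due date with a snooze is easy enough to prototype. A sketch of a hypothetical "due" decorator on top of unittest -- here the test is skipped quietly before the deadline (plus any snooze) and only runs for real, demanding attention, after it; a TAP version would presumably toggle the TODO directive instead:)

```python
import datetime
import functools
import io
import unittest

def due(deadline, snooze_days=0):
    # Hypothetical due date for a TODO test: quiet before the deadline
    # (plus snooze), a real test -- with real failures -- after it.
    when = datetime.date.fromisoformat(deadline) + \
        datetime.timedelta(days=snooze_days)

    def decorator(test_item):
        @functools.wraps(test_item)
        def wrapper(self, *args, **kwargs):
            if datetime.date.today() <= when:
                raise unittest.SkipTest(f"TODO, due {when.isoformat()}")
            return test_item(self, *args, **kwargs)
        return wrapper
    return decorator

class TestWithDeadlines(unittest.TestCase):
    @due("2099-01-01")
    def test_future_work(self):
        self.fail("not implemented yet")  # quiet until 2099

    @due("2000-01-01", snooze_days=30)
    def test_overdue_work(self):
        pass  # even the snoozed deadline has passed, so this really runs

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWithDeadlines)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(result.wasSuccessful(), len(result.skipped))
```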

    Mark
