It's not unheard of for module authors to complain that the automated test reports posted on testers.cpan.org FAIL modules that ought to PASS.

Tonight, I wish to make the opposite complaint: that one of my own modules garnered four PASSes when it should have FAILed!

I accuse myself: two days ago I uploaded v0.35 of ExtUtils::ModuleMaker. To extend test coverage, I had included tests that depended on functionality from IO::Capture, the relevant parts of which I bundled under t/testlib/. I got four PASS reports.
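For anyone unfamiliar with that technique: the idea is simply to prepend the bundled directory to @INC from within the test script. A rough sketch follows -- the test-file name is hypothetical, and my actual scripts may differ in detail:

    # t/some_test.t (hypothetical name)
    use strict;
    use warnings;
    use lib 't/testlib';             # put the bundled copy ahead of site_perl in @INC
    use Test::More tests => 1;

    use_ok('IO::Capture::Stderr');   # meant to resolve to t/testlib/IO/Capture/Stderr.pm

(IO::Capture::Stderr is one of IO::Capture's children; IO::Capture::Stdout would work the same way.)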

Then yesterday Scott Godin informed me by e-mail that EU::MM had failed on his box because IO::Capture was missing. (This failure was confirmed today by a report from imacat on testers.cpan.org.) I quickly realized that -- not for the first time -- I had failed to list certain files (in this case, IO::Capture and its children) in the MANIFEST, so they weren't bundled by 'make dist' and never made it up to CPAN. (I deliberately did *not* list IO::Capture as a prerequisite in Makefile.PL, because I didn't want to force users to install that module; I simply wanted them to use it during testing and then throw it away.) I corrected that error and uploaded v0.36 of EU::MM last night.
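Concretely, the repair was just a matter of adding the missing entries to MANIFEST so that 'make dist' would bundle them -- something along these lines, though the exact file list here is illustrative:

    t/testlib/IO/Capture.pm
    t/testlib/IO/Capture/Stderr.pm
    t/testlib/IO/Capture/Stdout.pm

IO::Capture itself still stays out of PREREQ_PM in Makefile.PL.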

The inference I drew was that the four false positives I received for v0.35 came from automated testing in environments where IO::Capture was already installed, so that the test scripts did not need to find IO::Capture in t/testlib/. But I would consider such an environment "polluted", in the sense that it supplied the distribution under test with functionality from non-core modules.
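The mechanics of the masking, as I understand them: 'use lib' merely prepends t/testlib/ to @INC, so when the bundled copy is missing Perl keeps walking @INC and loads whatever IO::Capture it finds in site_perl. In sketch form:

    use lib 't/testlib';        # t/testlib/IO/Capture/*.pm absent from the v0.35 tarball...
    use IO::Capture::Stderr;    # ...so this loads the site-wide copy where one is installed,
                                # and dies with "Can't locate IO/Capture/Stderr.pm in @INC" elsewhere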

Am I correct in this inference and this judgment? Or is there something about the automated testing that I don't understand?

jimk
