Michael G Schwern wrote:
On Tue, Jul 19, 2005 at 10:49:12PM -0400, James E Keenan wrote:
The inference I drew was that the four false positives I received for
v0.35 came from automated testing in an environment where IO::Capture
was already installed, so that the test script did not need to find
IO::Capture in t/testlib/. But I would consider such an environment
"polluted," in the sense that it contains non-core modules that supply
functionality to the distribution being tested.
Am I correct in this inference and this judgment? Or is there something
about the automated testing that I don't understand?
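
(As an aside: one quick way to see which copy of a module actually got
loaded is to check %INC after loading it. A minimal diagnostic sketch,
with IO::Capture standing in for whichever module the tests pull in:)

    #!/usr/bin/perl
    use strict;
    use warnings;

    # 'use lib' prepends t/testlib to @INC, so a bundled copy wins if
    # it is present; if it is missing, an installed copy gets picked up
    # silently and the test passes anyway: the false positive above.
    use lib 't/testlib';
    use IO::Capture;

    # %INC records the file each module was actually loaded from, so
    # this shows whether t/testlib/ or site_perl satisfied the 'use'.
    print "Loaded from: $INC{'IO/Capture.pm'}\n";
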
If I understand correctly, the issue here is that you failed to list a
dependency, yet the tests failed to catch it because that module was
already installed. It's a false positive, but not testers.cpan.org's
fault. It would have to figure out automatically what modules your code
needs and check that you list them as dependencies. That is fraught
with peril.
It's your responsibility to check that you're listing all your deps.
Use something which runs through your source code, finds all the
modules you're using, and checks that against your dependency list.
I'm sure someone here can recommend something on CPAN to do that.
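
(A rough sketch of the idea, not a real tool: walk lib/ and t/ for
"use" statements and diff them against the declared prereqs. The
%declared list is a stand-in for whatever your Makefile.PL puts in
PREREQ_PM, and the naive regex misses require(), string evals,
conditional loads, and core modules, which real CPAN tools handle:)

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Find;

    # Stand-in for the PREREQ_PM list in Makefile.PL.
    my %declared = map { $_ => 1 } qw(IO::Capture);

    my %used;
    find(sub {
        return unless /\.(?:pm|pl|t)$/;
        open my $fh, '<', $_ or return;
        while (my $line = <$fh>) {
            # Naive match: catches "use Foo::Bar;" but not require()
            # inside strings or conditional loads.
            $used{$1} = 1 if $line =~ /^\s*use\s+([A-Za-z]\w*(?:::\w+)*)/;
        }
    }, 'lib', 't');

    # Anything used but neither declared nor a pragma is suspect.
    for my $mod (sort keys %used) {
        next if $mod =~ /^(?:strict|warnings|lib|vars|constant)$/;
        print "undeclared: $mod\n" unless $declared{$mod};
    }
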
This is the sort of thing I've previously bitched about: we need some
sort of image-based testing methodology.
The cpan testers do actually install a lot of modules, so their testing
platforms are unusually well populated with modules.
Image-based platforms would be able to do a fresh, from-the-base-install
install of each module (a rough sketch of the idea follows). That would
also greatly help focus attention on modules that many, many others rely
on as a dep but that don't install right.
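
(Short of full images, one can approximate a base install by stripping
the site and vendor paths out of @INC before the tests load anything.
A sketch under that assumption, not how any tester actually works:)

    #!/usr/bin/perl
    # Approximate a base install: keep only the core library paths in
    # @INC, so an undeclared dep fails to load just as it would on a
    # fresh machine.
    use strict;
    use warnings;
    use Config;

    my %keep = map { $_ => 1 } grep { defined } @Config{qw(privlib archlib)};
    @INC = grep { $keep{$_} } @INC;

    # The undeclared dep now fails here instead of passing silently.
    if (eval { require IO::Capture; 1 }) {
        print "IO::Capture found even with a bare \@INC\n";
    } else {
        print "IO::Capture missing, as it would be on a base install\n";
    }
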
That said, you would probably need some sort of new "dep install failed"
test result, because half the time it isn't YOUR fault that some dep
went from good to bad (like File::Remove on Tiger failing).
Adam K