On Jun 21, 2005, at 5:59 PM, Geoffrey Young wrote:



This seems unfortunate for at least two reasons:
1) It ends up taking a really long time to run the tests.  At some
point it may take long enough that nightly tests become prohibitive
(even more so for continuous integration).

We have a substantial Perl code base (as I've said, several hundred
modules), with unit tests.  I have a test environment which does a
nightly checkout of the code, runs all the unit tests with
Devel::Cover enabled, and reports on the results.
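
Roughly, the nightly driver amounts to something like the following
(just a sketch; the checkout path and the HTML report choice are
placeholders here, not the actual setup):

    #!/usr/bin/perl
    # Sketch of a nightly coverage driver; path and report format
    # are placeholders.
    use strict;
    use warnings;

    chdir '/path/to/nightly/checkout' or die "chdir failed: $!";

    # Enable Devel::Cover for every test the harness runs.
    $ENV{HARNESS_PERL_SWITCHES} = '-MDevel::Cover';

    system('make', 'test') == 0 or warn "some tests failed\n";

    # Summarize the accumulated coverage database into a report.
    system('cover', '-report', 'html') == 0 or die "cover failed\n";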

I have unit tests for maybe 15% of our Perl code base, but at least a
basic compile test for maybe 90% of the almost 900 modules.  A
Devel::Cover run takes ~14 hours to complete (versus maybe 2 hours
without D::C), so I abandoned the idea of nightly coverage runs a long
time ago.  Not that I thought it would be that useful anyway: with
that many modules, and so few anywhere near 100% coverage, I doubt I
could make much sense of the report.


Yes, we share a similar situation, I think.  Overall the coverage is
not great, but there is a require-all.t that at least makes sure
everything compiles.
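
For what it's worth, that require-all.t is nothing elaborate; a
minimal sketch of that sort of test, assuming all the modules live
under lib/, looks something like:

    # require-all.t -- minimal sketch; assumes modules live under lib/
    use strict;
    use warnings;
    use File::Find;
    use Test::More;

    my @modules;
    find(
        sub {
            return unless /\.pm$/;
            (my $name = $File::Find::name) =~ s{^lib/}{};
            $name =~ s{/}{::}g;
            $name =~ s{\.pm$}{};
            push @modules, $name;
        },
        'lib'
    );

    plan tests => scalar @modules;
    require_ok($_) for @modules;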

A better idea would probably be to run D::C for just the packages that
changed on a given day (via TEST_FILES), since a module nobody has
touched in a month isn't going to get any better or worse coverage.
Intelligently diff that against the prior report for that module, and
then you've actually got some useful information.
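
Something along those lines might look like this (a rough sketch; the
lib-to-test filename mapping and the 24-hour window are assumptions,
as is the MakeMaker-style make test):

    # Run coverage only for modules changed in roughly the last day.
    use strict;
    use warnings;
    use File::Find;

    my @changed;
    find(
        sub {
            push @changed, $File::Find::name
                if /\.pm$/ && -M $_ < 1;    # modified within the last day
        },
        'lib'
    );

    # Map lib/Foo/Bar.pm to t/foo-bar.t (assumed naming convention).
    my @tests;
    for my $pm (@changed) {
        (my $t = $pm) =~ s{^lib/}{};
        $t =~ s{/}{-}g;
        $t =~ s{\.pm$}{.t};
        push @tests, 't/' . lc $t;
    }

    if (@tests) {
        local $ENV{HARNESS_PERL_SWITCHES} = '-MDevel::Cover';
        system('make', 'test', 'TEST_FILES=' . join(' ', @tests));
        system('cover', '-report', 'html');
        # then diff the per-module figures against yesterday's report
    }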


I hope that in the not-too-distant future, better modularization of the code base will help alleviate these problems to some extent.

However, it still seems to me that it would be nice if coverage collection didn't have such bad algorithmic properties.


-kevin
