It seems to me that the time Devel::Cover takes to do its book-keeping
when a process terminates is linear in the total number of files in the
cover_db, rather than in the number of files involved in that
particular process.
This means that as the code base grows, the time to run the unit tests
with coverage will increase quadratically with the number of files: the
per-process book-keeping cost grows linearly with the code base, and so
does the number of test processes (assuming the number of unit tests is
also roughly linear in the number of modules).
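(For what it's worth, the kind of measurement I have in mind is sketched
below. The -db and -silent options are the documented Devel::Cover ones,
but the benchmark itself is just my guess at how to isolate the
per-process teardown cost, and assumes a cover_db that has already been
populated by earlier runs.)

    use strict;
    use warnings;
    use Time::HiRes qw(time);

    # Time a do-nothing process under coverage against the existing
    # cover_db.  Repeating this as the db accumulates data for more and
    # more modules should show whether the teardown cost tracks the
    # total size of the db rather than the files this process touched.
    my $t0 = time();
    system($^X, '-MDevel::Cover=-db,cover_db,-silent,1', '-e', '1');
    printf "no-op process under coverage: %.2f wallclock secs\n",
           time() - $t0;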
A little supporting evidence, without and with coverage:
~10 modules
Files=15, Tests=71, 24 wallclock secs ( 1.45 cusr + 0.76 csys = 2.21 CPU)
Files=15, Tests=71, 73 wallclock secs (54.03 cusr + 3.36 csys = 57.39 CPU)
~500 modules
Files=171, Tests=9278, 216 wallclock secs (55.40 cusr + 16.78 csys = 72.18 CPU)
Files=171, Tests=9278, 3344 wallclock secs (2243.59 cusr + 164.73 csys = 2408.32 CPU)
The cost in CPU seems fairly constant (about a factor of 30 in both
cases), but there is a huge increase in the amount of non-CPU time,
presumably mostly I/O: in the ~500-module run, wallclock minus CPU goes
from roughly 144 seconds without coverage to roughly 936 seconds with it.
This seems unfortunate for at least two reasons:
1) It ends up taking a really long time to run the tests. At some point
that may be long enough to make nightly coverage runs prohibitive, and
even more so continuous integration.
2) Tests of tricky IPC-type code are hard to make reliable, because
processes that normally terminate in less than a second can take more
than 10 seconds to exit, throwing off heuristics like "wait N seconds
before concluding we're in a deadlock" (roughly the pattern sketched
below).
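To make (2) concrete, the heuristic I mean looks roughly like the sketch
below (child_worker.pl and the 5-second limit are made up purely for
illustration):

    use strict;
    use warnings;
    use POSIX ":sys_wait_h";

    # Fork a worker; child_worker.pl stands in for the real IPC test child.
    my $pid = fork() // die "fork failed: $!";
    if ($pid == 0) {
        exec $^X, 'child_worker.pl' or die "exec failed: $!";
    }

    # Wait at most 5 seconds for the child, then assume it is stuck.
    my $reaped = 0;
    for (1 .. 5) {
        if (waitpid($pid, WNOHANG) == $pid) { $reaped = 1; last }
        sleep 1;
    }

    # Under coverage the child can spend well over 5 seconds just writing
    # its coverage data at exit, so this fires even though nothing is
    # actually deadlocked.
    warn "possible deadlock in child $pid\n" unless $reaped;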
Is this conclusion about scalability accurate, or am I way off? Is there
something inherent that requires this behaviour, or could it be avoided?
Thanks,
Kevin