Let me preface this by saying that I know that our static analysis tests 
represent a tremendous amount of work by several people (especially Paul, 
with an enormous amount of respect to everyone who has also contributed to 
Perl::Critic and PPI), and that they have helped us reach, and continue to 
help us maintain, an important level of quality in our code.  Maintainability 
is the second most important goal for our code, after correctness.

With that in mind, I wonder if it's time to reconsider our strategy for using 
these tests effectively.

I ran make test.  It took almost six minutes:

Files=307, Tests=7413, 345 wallclock secs (187.91 cusr + 34.26 csys = 222.17 
CPU)

I renamed DEVELOPING to DEV and ran make test again.  It took under four 
minutes:

Files=296, Tests=7392, 220 wallclock secs (121.98 cusr + 26.64 csys = 148.62 
CPU)

The first run failed several coding standards tests, which suggests to me that 
people don't run them before every commit.  We can't prevent accidental 
forgetting, but I wonder whether making the coding standards tests faster 
would make them less painful to run, and so make it more likely that people 
would run them more often.

Most of our commits touch fewer than a dozen files.  Are we getting enough 
benefit from performing static (non-functional) analysis of all of the several 
thousand files in our tree on every make test run to justify adding another 
50% to our test run times?  (Not all assertions are equal in value, but 
cutting those 21 tests out of 7400 drops the time of a test run by roughly a 
third.)
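
One possibility, offered only as a rough sketch: a small driver that critiques 
just the files modified in the working copy instead of the whole tree.  This 
assumes a Subversion checkout and the stock Perl::Critic module; the file 
pattern and output format below are placeholders, not our actual test 
configuration:

    #!/usr/bin/perl
    # Hypothetical sketch: run Perl::Critic only against Perl files that
    # 'svn status' reports as added or modified, rather than the full tree.
    use strict;
    use warnings;
    use Perl::Critic;

    # Collect added/modified Perl files from the working copy.
    my @changed = grep { /\.(?:pm|pl|t)$/ }
                  map  { ( split ' ', $_, 2 )[1] }
                  grep { /^[AM]/ }
                  `svn status`;
    chomp @changed;

    # Default policies; point -profile at the project's perlcriticrc instead
    # if we want the same standards the test suite enforces.
    my $critic = Perl::Critic->new();

    for my $file (@changed) {
        for my $violation ( $critic->critique($file) ) {
            printf "%s: %s at line %d\n",
                $file, $violation->description, $violation->line_number;
        }
    }

Whether something like that belongs in a pre-commit habit, a separate make 
target, or behind an environment variable in the existing tests is exactly 
the sort of thing I'd like to brainstorm.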

Again, our code has improved in no small part due to the tests and the 
diligence of all committers in running them and correcting minor accidents as 
they occur.  I mention all of this only to raise the possibility of 
brainstorming alternate ways to use the tests to their full advantage.  If 
we're using them to their full potential now, that's fine.

-- c
