Personally, having submitted one optional task, here are my testing
criteria:

        1. Code coverage is nice, but it doesn't tell you much (though it is
a good metric for automation purposes).  Instead, I evaluated the
requirements for the task and constructed a set of test cases to cover
those requirements.  Then, I examined the source for exceptional conditions
that the implementation can raise, and ensured that these are handled
correctly in my tests.
        After running through my tests, I generally perform a coverage
analysis to see if there are any "glaring holes" (that is, any major
branches in the logic that were not covered by the tests).  This requires
human judgment in the evaluation of each block: does an IOException
catch block need to be tested, and can it be?  How could a ThreadDeath
exception catch be simulated?  In this way, I use the code-coverage tool
to point me toward the places where I should more closely examine the
code's behavior.
        One thing I should point out is that code-coverage tools
actually work against the developer after a certain point.  True, they do
point out where code is not even being executed, which helps a great deal.
But in an object-oriented world, they fail to trace the object's state
through the code.  Path analysis can help, but I have not yet seen a tool
that can perform such an analysis across method calls.  So developers tend
to consider code tested simply because it is covered, which is an enormous
fallacy.
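        To make the IOException question above concrete, here is a small
sketch of how a catch block can be deliberately exercised in a test.  The
class and method names are invented for illustration only (they are not
taken from Ant): a stub stream that always fails forces the exceptional
path to run.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical example: a reader that falls back to a default value
// when the underlying stream throws an IOException.
public class FallbackReader {
    // Returns the first byte of the stream, or -1 if reading fails.
    static int firstByteOrDefault(InputStream in) {
        try {
            return in.read();
        } catch (IOException e) {
            return -1; // the catch block the test must reach
        }
    }

    public static void main(String[] args) {
        // A stub stream that always fails, forcing the catch block to run.
        InputStream failing = new InputStream() {
            @Override
            public int read() throws IOException {
                throw new IOException("simulated failure");
            }
        };
        if (firstByteOrDefault(failing) != -1) {
            throw new AssertionError("catch block was not exercised");
        }
        // The normal path still works for a healthy stream.
        if (firstByteOrDefault(new ByteArrayInputStream(new byte[] {42})) != 42) {
            throw new AssertionError("normal path broken");
        }
        System.out.println("IOException path covered");
    }
}
```

A ThreadDeath catch, by contrast, has no such easy stub, which is exactly
why each block needs a human decision.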
        2. Using an XDoclet approach, I can envision a code-coverage system
that weights blocks of code for coverage analysis.  That is, critical
blocks that must be executed would be weighted heavily, whereas other
blocks (such as ThreadDeath catches) would be weighted lightly.  This
would better reflect the style of coverage analysis that I perform.
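        The weighted-coverage idea could be scored roughly like this.  All
names and weights here are invented for illustration; no such tool exists
in GroboUtils as described, this is only a sketch of the arithmetic:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical weighted-coverage score: each block carries a weight
// (critical blocks high, near-untestable catches low), and the score is
// the weighted fraction of blocks that were executed.
public class WeightedCoverage {
    static double score(Map<String, Double> weights,
                        Map<String, Boolean> executed) {
        double total = 0.0, covered = 0.0;
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            total += e.getValue();
            if (executed.getOrDefault(e.getKey(), false)) {
                covered += e.getValue();
            }
        }
        return total == 0.0 ? 0.0 : covered / total;
    }

    public static void main(String[] args) {
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("mainLogic", 10.0);       // critical: weighted heavily
        weights.put("ioExceptionCatch", 2.0); // exceptional path: moderate
        weights.put("threadDeathCatch", 0.1); // nearly untestable: light
        Map<String, Boolean> executed = new LinkedHashMap<>();
        executed.put("mainLogic", true);
        executed.put("ioExceptionCatch", true);
        // threadDeathCatch never runs, but barely dents the score.
        System.out.printf("weighted coverage = %.3f%n",
                score(weights, executed));
    }
}
```

Under this scheme, skipping the ThreadDeath catch costs almost nothing,
while skipping a critical block drags the score down sharply.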
        Yes, I do use a code coverage tool: the one I developed at
http://groboutils.sourceforge.net/codecoverage/index.html, which was
released under the MIT license.

-Matt

> -----Original Message-----
> From: Magiel Bruntink [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 21, 2003 6:16 AM
> To: [EMAIL PROTECTED]
> Subject: Testing of Ant
> 
> 
> Dear Ant developers,
> 
> I am currently working on a paper regarding the evaluation of 
> software testability.
> To validate theoretical results, I have used the Ant sources 
> as the subject of a case study.
> I would like to ask you to answer two questions about the way 
> Ant is tested.
> Your answers will help me provide context for the case study, 
> and improve its validity.
> 
> 1. Do you use some kind of testing criterion? In other words, 
> is there some kind of rule
> describing what should be tested? Examples of such a 
> criterion could be: 
> "every line of code should be executed during testing" or 
> "every method should
> have at least one test case." Often such a criterion is 
> referred to as code coverage
> criterion.
> 
> 2. Is the level of compliance to the testing criterion 
> subject to measurement? In case
> of the first example, the percentage of lines executed during 
> testing could be measured.
> Do you use a tool to automatically calculate your code coverage?
> 
> Thanks in advance for your time.
> 
> Yours,
> 
> Magiel Bruntink
> [EMAIL PROTECTED]
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
