Antoine Levy-Lambert wrote:

1. Do you use some kind of testing criterion? In other words, is there some kind of rule describing what should be tested? Examples of such a criterion could be: "every line of code should be executed during testing" or "every method should have at least one test case." Often such a criterion is referred to as a code coverage criterion.

There is no real rule.

Test cases tend to be added to Ant when problems appear, for instance to demonstrate that a change fixes a bug reported in Bugzilla. In some instances, test cases are added proactively, to ensure that functionality will be preserved after a change.

An interesting historical note is that some of the core of Ant actually has minimal coverage, simply because the code predates the tests. Only later, when somebody went to enhance that part of the system, were the tests written. This leads to a process of:


- write tests to specify and validate current behaviour
- change the code
- run the tests to verify the new behaviour is compatible

An example of this was last year's changes to property evaluation, the intent being to permit $ to be passed through except when used in ${property} strings. Before the change could be made, we had to write the JUnit tests for property expansion. Of course, such a fundamental thing had had functional tests for a long time, but it is still amusing that we had no JUnit coverage.
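The expansion rule being tested can be sketched in plain Java. This is an illustrative toy, not Ant's actual PropertyHelper code; the $$-escapes-to-$ behaviour is an assumption about Ant's syntax layered on top of what the paragraph above describes (a bare $ passes through, ${name} is substituted):

```java
import java.util.Map;

// Toy sketch of the property-expansion behaviour described above.
// NOT Ant's real implementation -- names and the "$$" escape rule
// are assumptions for illustration only.
public class PropertyExpansion {
    static String expand(String s, Map<String, String> props) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            char c = s.charAt(i);
            if (c == '$' && i + 1 < s.length()) {
                char next = s.charAt(i + 1);
                if (next == '$') {           // "$$" -> literal '$' (assumed escape)
                    out.append('$');
                    i += 2;
                    continue;
                }
                if (next == '{') {           // "${name}" -> look up property
                    int close = s.indexOf('}', i + 2);
                    if (close >= 0) {
                        String name = s.substring(i + 2, close);
                        // unknown properties are left unexpanded
                        out.append(props.getOrDefault(name, "${" + name + "}"));
                        i = close + 1;
                        continue;
                    }
                }
            }
            out.append(c);                   // any other '$' passes through
            i++;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of("user", "antoine");
        System.out.println(expand("cost is $5", props));    // cost is $5
        System.out.println(expand("hello ${user}", props)); // hello antoine
    }
}
```

A JUnit test for this behaviour would simply assert each of those cases, which is exactly the kind of coverage that had to be written before the change could land safely.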

New code only comes in with tests; effectively the Ant dev process has now evolved to depend upon JUnit tests and the big functional test, the Gump. Gump verifies every night that no change to Ant breaks the majority of the popular open source projects. Sometimes one does, but we try to avoid that, as we get notified whenever it is our fault :)

When Ant does break the Gump, we usually get a fix out fast, as it becomes a high-visibility issue. This means that by the time a release copy ships, it has been thoroughly tested by many projects.

The weakness with this approach is that projects that don't work like the Gump (those with many levels of <ant> tasks, ones which run for hours with forked processes, and under-IDE execution) sometimes exhibit problems that we don't catch. Example: slow memory leakage in the <exec> task. Nor, because the Gump runs on Unix, do we test everything building on Windows or other platforms as thoroughly as we'd like.


One of the problems with test cases is that a lot of tasks require specific external resources (databases, application servers, version control systems, SMTP mail server, ...) which do not always exist and do not exist under the same name everywhere. These tasks are often very badly covered by test cases.

Yes, anything that needs network resources has bad coverage; many of the obscure optional tasks (especially the SCM tools) don't get tested that well either. We've also been caught out by interoperability with third-party tools, such as a change to <zip> that wasn't compatible with WinZip... we were not automatically testing that WinZip could handle it.


As a counterpoint, if you were to look at Apache Axis, the SOAP stack, it has lots of interop tests, but a consequence is that you cannot run the tests offline or behind a firewall, and the nightly tests often fail when some server is missing.


The situation is not ideal, but the current test suite runs on my PC under Win 2000 in approximately 5 minutes and gives a hint as to whether the current version of Ant is OK or not.

This is a key feature of the Ant tests worth valuing: they are fast.


2. Is the level of compliance to the testing criterion subject to measurement? In the case of the first example, the percentage of lines executed during testing could be measured. Do you use a tool to automatically calculate your code coverage?

Since there is no real rule, there is also no measurement.

Agreed. Conor runs Clover coverage tests every so often, and they are interesting.





