On Tue, 22 Jul 2003 02:37 am, Steve Loughran wrote:
>
> Agreed. Conor runs Clover coverage tests every so often, and they are
> interesting.
>
I'll be doing another run soon and filling in the history gaps.
Conor
Antoine Levy-Lambert wrote:
> 1. Do you use some kind of testing criterion? In other words, is there
> some kind of rule describing what should be tested? Examples of such a
> criterion could be: "every line of code should be executed during
> testing" or "every method should have at least one test case."
Personally, having submitted one optional task, here are my testing
criteria:
1. Code coverage is nice, but it doesn't tell you much (though it is a
good metric for automation purposes). Instead, I evaluated the
requirements for the task and constructed a set of test cases to cover
these; a sketch of that style follows below.
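
For illustration, here is a minimal sketch of what such a
requirement-driven test can look like, using the BuildFileTest base
class from Ant's own test harness; the task name, buildfile path, and
target names here are made up:

import org.apache.tools.ant.BuildFileTest;

// Hypothetical test for an optional task: each test method maps to
// one requirement, not to a line-coverage target.
public class MyTaskTest extends BuildFileTest {

    public MyTaskTest(String name) {
        super(name);
    }

    public void setUp() {
        // hypothetical buildfile that drives the task under test
        configureProject("src/etc/testcases/taskdefs/optional/mytask.xml");
    }

    // Requirement: the task succeeds on valid input.
    public void testValidInput() {
        executeTarget("valid-input");
    }

    // Requirement: the task fails when a mandatory attribute is missing.
    public void testMissingAttribute() {
        expectBuildException("missing-attribute",
                             "mandatory attribute not set");
    }
}

Running that through JUnit exercises exactly the behaviour the
requirements promise, independent of how many lines it happens to
touch.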
Also, the unit tests are run each night using Gump, which also tests
whether other projects using Ant can still be built.
http://jakarta.apache.org/gump/
There is also a large user base that is not shy about reporting issues.
Peter
On Mon, 21 Jul 2003, Magiel Bruntink <[EMAIL PROTECTED]>
wrote:
> To validate theoretical results, I have used the Ant sources as the
> subject of a case study.
Then you already have our main point of testing, our unit tests (in
the src/testcases subdirectory).
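
For a rough picture of how that is laid out: each task, its unit test,
and the buildfile the test drives sit in parallel trees. Taking the
echo task as an example (exact paths may differ between versions):

src/main/org/apache/tools/ant/taskdefs/Echo.java (the task itself)
src/testcases/org/apache/tools/ant/taskdefs/EchoTest.java (its JUnit test)
src/etc/testcases/taskdefs/echo.xml (the buildfile the test executes)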