On 12-03-03 03:56 AM, Mario Frasca wrote:
On Fri, 02 Mar 2012 12:15:48 -0500
pgilbert902 at gmail.com (Paul Gilbert) wrote:

Mario

[...] Examples only need to run, but in tests/ you can
do things like

if( 2*2 != 4 ) stop("arithmetic is messed up.")


The problem is: when you do a stop(), you stop, meaning you do not run
subsequent tests.  The nice part of unit testing is that you get a
complete report of all the failing parts, not just the first one.
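A minimal sketch of that idea (mine, not from RUnit or svUnit): record each check instead of stopping, report everything at the end, and only then decide whether to signal failure. The `check()` helper and test labels are hypothetical.

```r
## Collect failures rather than stopping at the first one.
failures <- character(0)

check <- function(label, expr) {
  ## treat an error in the expression as a failure, too
  ok <- isTRUE(tryCatch(expr, error = function(e) FALSE))
  if (!ok) failures <<- c(failures, label)
}

check("arithmetic",          2 * 2 == 4)
check("string length",       nchar("abc") == 3)
check("deliberately broken", 1 + 1 == 3)   # fails, but the run continues

if (length(failures) > 0) {
  cat("Failing tests:", paste(failures, collapse = ", "), "\n")
  ## in tests/ you would signal R CMD check here:
  ## stop(length(failures), " test(s) failed")
}
```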

When debugging, I use make to run the tests; make -k runs them all (and make -j -k runs them in parallel). Generally I don't expect to have many errors by the time I look at the R-forge results. Of course, not running all the tests is a pain if you have multiple failures and are trying to debug for other platforms using R-forge.

What you describe, I would translate into:

one script -the current one- to execute all tests. It has to
succeed, in the sense that it has to perform all the tests and say it
managed to produce a report.

one script -which I have not yet written- to examine the complete
test report from the former one and inform `R CMD check` (the user)
whether anything went wrong in the unit tests.
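A hypothetical sketch of that two-script split (the file name and test names are my invention, not anything the thread specifies): script 1 always completes and writes a machine-readable report; script 2 is the only place that raises an error for `R CMD check`.

```r
## Script 1: run everything, write a report, never stop().
report <- data.frame(test   = c("arithmetic", "broken"),
                     passed = c(TRUE, FALSE))
write.csv(report, "unit-test-report.csv", row.names = FALSE)

## Script 2 (the one not yet written): examine the report and
## signal failure to R CMD check only here.
report <- read.csv("unit-test-report.csv")
bad <- report$test[!report$passed]
if (length(bad) > 0) {
  cat("unit tests failed:", paste(bad, collapse = ", "), "\n")
  ## stop("unit tests failed: ", paste(bad, collapse = ", "))
}
```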

A WARNING would be bad enough if there are failing tests.

Do you know how I can emit a NOTE from a test script?
I sometimes use disabled tests as a reminder of things that still have
to be done, but if I associate them with a WARNING or ERROR, R-forge will
not allow me to release the package.

You can use message() or cat() to put it in the R output, but I don't know how you would pass that back to R CMD check. If you use a make target for this, you can grep the output for those messages.
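For example (a sketch, with hypothetical test names): tag the reminders with a fixed string so a make target can grep the tests/*.Rout files that R CMD check leaves behind, e.g. `grep 'NOTE:' mypkg.Rcheck/tests/*.Rout`.

```r
## Hypothetical disabled tests, emitted with a greppable tag.
## cat() output ends up in the tests/*.Rout file kept by R CMD check.
skipped <- c("test_foo", "test_bar")
for (t in skipped)
  cat("NOTE: test disabled, still to do:", t, "\n")
```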

You may be able to get similar results from RUnit or svUnit; it would
just be a question of passing the stop() and warning() calls back to the
top level in the script file in tests/. If you don't do that, as you
observed, R CMD check thinks the unit testing worked fine: it did its
job and found the errors.

Exactly.  This made me think of splitting the task in two parts.
Thanks, I will try that and come back here next week (and document it
on the svUnit and Stack Overflow sites).

(Happy to hear additional points of view on this, my understanding of
RUnit and svUnit is limited.)

Extensive unit testing lets me experiment with broad internal changes.
The XML report from svUnit, combined with Jenkins, automates the boring
part of the task. A pity we still lack a coverage report.

I get most of this from the tests/ and make-based system I've been using for many years, but the hard part is developing a comprehensive set of tests. Do the unit-testing frameworks help with that?

Paul

Mario

______________________________________________
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
