Hi Gilles,
thanks for your answer.

2012/8/30 Gilles Sadowski <gil...@harfang.homelinux.org>:
> Hello.
>
>> testing of special functions involves comparing actual values returned
>> by CM with expected values computed with arbitrary-precision
>> software (I use Maxima [1] for this purpose). As I intend these tests
>> to be assessments of the overall accuracy of our implementations, the
>> number of test values is quite large.
>
> How large?
>
>> For the time being, I've inlined
>> the reference values in double[][] arrays, in the test classes.
>
> A priori, that's fine.
>
>> This
>> clutters the code, and I will move these reference values to resource
>> files.
>
> I'm not fond of this idea.
> I prefer unit test classes to be self-contained as much as possible.
>
>> In order to limit the size of these files, I'm considering binary
>> files, the obvious drawback being the lack of readability (for those
>> of us who haven't entered the Matrix yet).
>> So what I would propose is to add a readme.txt file in the same resource
>> file directory, where the content of each binary file would be
>> detailed.
>> Would you object to that?
>
> Why do you want to test a very large number of values? Isn't it enough to
> select problematic cases (near boundaries, very small values, very large
> values, etc.)?
>
That would make sense if we follow the path you sketch below.
To play devil's advocate, there is one minor objection: depending on
the implementation, the problematic values are not always the same.
But that is tractable: we would incrementally add problematic values
as new implementations are written, never removing any of the values
previously considered problematic.

> I'm not sure that unit tests should aim at testing all values exhaustively.
> That might be a side project, maybe to be included in the user guide (?).
>
I like this idea very much. In fact, I would like to be able to provide
the user with a full report on accuracy, for specific ranges of the
argument (where relevant). Doing so in the user guide would give us the
opportunity to include graphs, which might help. The only objection I
would have is the following: surefire reports are generated
automatically, so the impact on accuracy of any change in the
implementation gets reported immediately, while a report in the user
guide would have to be regenerated by hand.

Maybe we could have a side project which is run automatically as part
of the build cycle, and which produces human-readable reports. I lack
practical experience in this field, so I would welcome any suggestions.
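
To make this a little more concrete, I imagine the side project could
contain small report generators along the following lines. Again, this
is only a rough sketch: a real version would read the reference values
from resource files generated with Maxima instead of hard-coding a tiny
table, and would cover whole ranges of the argument.

import java.util.Locale;

import org.apache.commons.math3.special.Gamma;

/**
 * Toy accuracy report generator: compares Gamma.logGamma against
 * reference values and prints a human-readable summary.
 */
public class LogGammaReport {
    /** Reference values, laid out as {x, expected}. */
    private static final double[][] REF = {
        {0.5, 0.5723649429247001}, // log(sqrt(pi))
        {3.0, 0.6931471805599453}, // log(2)
        {5.0, 3.1780538303479458}, // log(24)
    };

    /** Returns the error of actual w.r.t. expected, in ulps. */
    private static double ulpError(double expected, double actual) {
        return Math.abs(actual - expected) / Math.ulp(expected);
    }

    public static void main(String[] args) {
        double max = 0;
        double sum = 0;
        for (final double[] ref : REF) {
            final double err = ulpError(ref[1], Gamma.logGamma(ref[0]));
            max = Math.max(max, err);
            sum += err;
        }
        System.out.println(String.format(Locale.US,
                "logGamma: %d values, max error = %.1f ulps, "
                + "mean error = %.1f ulps",
                REF.length, max, sum / REF.length));
    }
}

Such a main() could then be run as part of the build (e.g. with the
exec plugin), so that the reports are regenerated automatically.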

Sébastien

