On 2016-06-21 23:06, stephan.j.eh...@gmail.com wrote:
> 1) You want to keep the source code clean so doctests should be “short”. But some test cases require more complicated code or have long output which you would not like to add to the source code.
For long or special doctests, you can put the tests in a separate module containing only tests. We have some in src/sage/tests
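For instance, such a module is just a file whose docstring consists of doctests (the file name below is made up, but the pattern is the same as for the existing files in src/sage/tests):

    # src/sage/tests/long_doctests.py  (hypothetical name)
    r"""
    Doctests that are too long or too noisy for the main source files.

    TESTS::

        sage: factor(2^64 + 1)
        274177 * 67280421310721
    """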
> 2) You don’t want to have certain tests in the documentation of a function which would just distract the user.
For this, you can use TESTS: blocks which do not appear in the documentation.
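Roughly like this (the function is made up; the point is that the TESTS block is run by the doctester but not shown in the reference manual):

    def double(n):
        r"""
        Return twice ``n``.

        EXAMPLES::

            sage: double(3)
            6

        TESTS:

        Corner cases that would only distract a reader of the manual::

            sage: double(-1)
            -2
        """
        return 2 * n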
> 3) Some things cannot be tested in doctests, or at least not easily. What I would like to see, for instance, are: a) performance tests, where we would test against the previous release to make sure that the changes being introduced do not affect performance in a negative way;
That's really difficult to get right. But I'd love to hear good suggestions. Keep in mind that timing an operation is the easy part, the hard part is what to do with the timing results.
There is an old ticket about this, but it never got anywhere. See https://trac.sagemath.org/ticket/12720
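Just to illustrate: the measuring half is only a few lines (the baseline file, the slack factor and the function name below are all made up); the hard half is deciding what baseline and slack factor are meaningful across different machines and loads.

    import json, timeit

    def check_timing(label, stmt, setup="pass",
                     baseline_file="timings.json", slack=1.5):
        # Take the best of a few runs to reduce noise.
        t = min(timeit.repeat(stmt, setup=setup, number=100, repeat=5))
        try:
            baseline = json.load(open(baseline_file)).get(label)
        except FileNotFoundError:
            baseline = None   # no reference data yet, nothing to compare
        if baseline is not None and t > slack * baseline:
            raise AssertionError("%s regressed: %.4fs vs baseline %.4fs"
                                 % (label, t, baseline))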
> b) randomized tests, for example: check for a number of randomly generated number fields that arithmetic operations with randomly generated number field elements give the correct results. Randomized tests help to identify issues that occur with input that no one thought about testing;
This can be done with doctests.
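For example, a doctest can use random elements as long as the property it checks is deterministic (the number field below is chosen purely for illustration):

    sage: x = polygen(QQ)
    sage: K.<a> = NumberField(x^3 - 2)
    sage: u = K.random_element(); v = K.random_element()
    sage: (u + v) - v == u
    True
    sage: u * (v + 1) == u*v + u
    True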
> c) test mathematical correctness more extensively by checking against a larger set of results that have been verified to be mathematically correct in some way;
This can be done with doctests.
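For example, a doctest can compare a computation against reference values that were verified independently; in a real test the table would live in a data file and the test would probably carry a # long time tag, but the idea is just:

    sage: [euler_phi(n) for n in range(1, 11)]   # tiny stand-in for verified reference data
    [1, 1, 2, 2, 4, 2, 6, 4, 6, 4]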
> (These tests in particular could run very long if we want to cover large ranges. We should not make everyone run them, but rather have some bots run them in parallel, and perhaps not block new releases on untested cases; they could run continuously and only block a new release if a problem is discovered. The data to check against would be publicly available, and we could encourage people to install a bot on their machine that runs at scheduled times and picks some examples that have not yet been checked with the current development version (or not on their particular architecture or OS version).)
This sounds like overkill. It would introduce "yet another" testing mechanism besides the patchbot and the buildbot that we have to maintain.
> d) test unpickling of objects, which seems to break rather often and is not covered at all by any of the doctests.
This can be done with doctests (possibly using the pickle jar).
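The usual loads(dumps(...)) round trip covers pickling with the current version; the pickle jar is what catches pickles written by older versions. For example:

    sage: R.<t> = QQ[]
    sage: p = t^2 + 1
    sage: loads(dumps(p)) == p
    True
    sage: TestSuite(p).run()   # includes a _test_pickling check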
> Maybe not all of these tests would have to be run every time someone submits a patch, but they should be run before a release comes out.
I agree with this. We could add such tests if the release manager agrees. However, this can also be done with doctests (say, using an # optional - release tag).
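Such a tag would keep the tests in the normal doctest framework but only run them when explicitly requested, roughly like this (the tag and the function name are of course hypothetical):

    sage: exhaustive_release_check()    # optional - release
    True

The release manager would then run the doctester with something like --optional=sage,release before tagging a release.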
> What do you think?
I think that doctests are really just an interface. You can easily run all kinds of tests using the doctester and it's nice to have a consistent interface for testing. I think we should only introduce a new mechanism if there is a clear need.
Jeroen.