Thanks a lot for your answer, Jeroen.

On Wednesday, June 22, 2016 at 2:57:08 AM UTC-6, Jeroen Demeyer wrote:
> On 2016-06-21 23:06, stephan...@gmail.com wrote:
> > 1) You want to keep the source code clean, so doctests should be
> > "short". But some test cases require more complicated code or have
> > long output which you would not like to add to the source code.
>
> For long or special doctests, you can put the tests in a separate
> module containing only tests. We have some in src/sage/tests
>
> > 2) You don't want to have certain tests in the documentation of a
> > function which would just distract the user.
>
> For this, you can use TESTS: blocks which do not appear in the
> documentation.

Good to know, I was not aware of that.
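For anyone else who was not aware of this convention: if I understand it
correctly, a TESTS block is just an extra docstring section that the
documentation builder skips, roughly like this (a sketch with a made-up
function):

    def is_even(n):
        r"""
        Return whether ``n`` is even.

        EXAMPLES::

            sage: is_even(4)
            True

        TESTS:

        Corner cases that would only clutter the rendered documentation::

            sage: is_even(0)
            True
            sage: is_even(-3)
            False
        """
        return n % 2 == 0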
> > 3) Some things can't be tested in doctests, or at least not easily.
> >
> > What I would like to see, for instance:
> >
> > a) performance tests, where we would test against the previous
> > release to make sure that changes being introduced do not affect
> > performance negatively
>
> That's really difficult to get right. But I'd love to hear good
> suggestions. Keep in mind that timing an operation is the easy part;
> the hard part is what to do with the timing results.
>
> There is an old ticket about this, but it never got anywhere.
> See https://trac.sagemath.org/ticket/12720

I'll take a look.

> > b) randomized tests, for example: check for a number of randomly
> > generated number fields that arithmetic with randomly generated
> > number field elements gives the correct results. Randomized tests
> > help to identify issues that occur with input that no one thought
> > about testing.
>
> This can be done with doctests.
>
> > c) test mathematical correctness more extensively by checking
> > against a larger set of stored results that have been verified to
> > be mathematically correct in some way
>
> This can be done with doctests.

I think there needs to be some standardized way to do so - in
particular, how and where to store the results. I'm thinking of
computations that produce more output than just a single integer, for
example the q-expansion of a modular form with possibly large Fourier
coefficients.

> > (These tests in particular could run very long if we want to cover
> > large ranges, so we should not make everyone run them, but rather
> > have some bots run them in parallel. They would not necessarily
> > block new releases merely because they have not been run yet, but
> > they could run continuously and block a new release if a problem is
> > discovered. The data to check against would be publicly available,
> > and we could encourage people to install a bot on their machine
> > that runs at scheduled times and picks some examples that have not
> > yet been checked with the current development version, or not on
> > their particular architecture or OS version.)
>
> This sounds like overkill. It would introduce "yet another" testing
> mechanism besides the patchbot and the buildbot that we have to
> maintain.

Maybe not: it could be an additional/optional module of the
patchbot/buildbot, I guess. I just thought that it would make sense not
to have all of these data tests run by every bot, because there could
potentially be many such tests that run for a rather long time - but
maybe that is already how the tests are distributed among the bots,
which I know nothing about.

> > d) test unpickling of objects, which seems to break rather often
> > and is not covered at all by any of the doctests
>
> This can be done with doctests (possibly using the pickle jar).

How would the pickles be stored/distributed?

> > Maybe not all of these tests would have to be run every time
> > someone submits a patch, but they should be run before a release
> > comes out.
>
> I agree with this. We could add such tests if the release manager
> agrees. However, this can also be done with doctests (say, using an
> # optional - release tag).
>
> > What do you think?
>
> I think that doctests are really just an interface. You can easily
> run all kinds of tests using the doctester, and it's nice to have a
> consistent interface for testing. I think we should only introduce a
> new mechanism if there is a clear need.

I totally agree that all of these things can _in principle_ be done
using doctests, in particular using the TESTS block or the tests
directory I wasn't aware of. Of course, you can write a test function
and call it in a doctest - but it's a bit weird, I find.
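To make (b) concrete: as long as the property being checked holds for
every input, the output is deterministic even though the input is
random, so it fits the doctest format. Roughly (an untested sketch):

    sage: x = polygen(QQ)
    sage: K.<a> = NumberField(x^3 - 2)
    sage: u = K.random_element()
    sage: v = K.random_element()
    sage: (u + v) - v == u
    True
    sage: u * v == v * u
    True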
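For (d), I suppose the usual round-trip idiom already covers pickling
within a single version, something like:

    sage: K.<a> = NumberField(x^2 + 1)
    sage: loads(dumps(a)) == a
    True
    sage: TestSuite(K).run()  # runs generic checks, including pickling

But that does not exercise unpickling of pickles written by *older*
versions, which is what my question about storing/distributing the
pickles was about.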
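And the release-only tests would, if I understand the suggestion
correctly, just piggyback on the existing tag mechanism, analogous to
how "# long time" (run only with sage -t --long) and
"# optional - somepackage" work today. Hypothetically (both the
function and the "release" tag are made up here):

    sage: check_stored_qexpansions()  # optional - release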
I also think that having doctests as the only interface might lead
developers to not write more extensive tests, maybe because it seems
restrictive and you don't want to mess up your code.

If the consensus is to stick to this interface, maybe the developer
documentation could give a few more pointers, e.g. to using TESTS
blocks and to putting tests in the tests directory (or maybe that is
already there and I only have to revisit it), and the reviewer
checklist could be updated to encourage writing more tests.

What I found in the documentation, which clearly discourages doing any
of the things I wrote above, is this [1]:

"Even then, long doctests should ideally complete in 5 seconds or
less. We know that you (the author) want to show off the capabilities
of your code, but this is not the place to do so. Long-running tests
will sooner or later hurt our ability to run the testsuite. Really,
doctests should be as fast as possible while providing coverage for
the code."

[1]: http://doc.sagemath.org/html/en/developer/doctesting.html#optional-arguments

Stephan

> Jeroen.