On Mar 14, 10:58 am, Volker Braun <vbraun.n...@gmail.com> wrote:
> On Monday, March 14, 2011 6:13:54 AM UTC, Dr. David Kirkby wrote:
>
> > 1) A doctest should have a comment by it, referencing the trac ticket
> > where the test was added. In other words, just simply "Trac #1234" if
> > the test was added on ticket 1234.
>
> In the few cases where the doctest was a contentious issue or we
> previously found some subtle problem with it, then it's great to have a
> trac reference where more details are recorded for posterity. But the
> majority of the doctests do test rather fundamental functionality. Then
> I don't see the point of littering trac references all over the place
> just so that you don't have to look into the mercurial log to find the
> original author and reviewer.
>
> > 2) Either the author or reviewer should justify the doctest
>
> Again, a lot of doctests are just checking that some object can be
> constructed as intended. What are the author/reviewer supposed to
> justify? The justification is implicit in the code. If, on the other
> hand, your doctest proves the four color theorem then a reference would
> be in order. It all depends on the specific doctest.
>
> Elementary tests of functionality, or anything that's obvious to an
> expert in the field, shouldn't require justification. The remaining 10%
> or so should have some explanation or be written in a way that the whole
> doctest performs some consistency check.
>
> > Or do you believe it's fine to have a test which nobody can justify?
>
> > --
> > A: Because it messes up the order in which people normally read text.
> > Q: Why is top-posting such a bad thing?
> > A: Top-posting.
> > Q: What is the most annoying thing in e-mail?
>
> > Dave
Doctests are trying to fulfill two goals:

1) Demonstrate the purpose of the method/class etc.
2) Test that it works as expected.

I think the focus should be on satisfying 1). Satisfying 2) -- as David
has often brought up -- is a hard problem and often requires a lot of
code and intricate examples. Putting complicated tests within the source
code clutters the code and confuses users trying to read the
documentation. Doctests should be chosen so as to maximise understanding
for the reader.

However, I am very much with David on the need for better and more
thorough testing of the code, and on clarifying what has been tested and
why the tests are correct. But we shouldn't use doctests for this. I
think we should have a separate library of tests, structured in modules
just like the code, which can contain as many and as complicated and
intricate tests as code authors and reviewers wish to write.

Personally, I write many tests in parallel with my code, because I know
I will want to make sure my code works anyway. Doing this manually in
the notebook or terminal is a waste of time: as soon as I fix one bug, I
have to redo all the tests by hand. So I write these tests as Sage
scripts instead, which can easily be rerun (see the sketch at the end of
this mail). They could just as easily be uploaded with the patch.

At least a few of the points raised against David's suggestion would
disappear with such a system, since all the detailed tests (basically
anything not falling under goal 1) would go into this separate testing
library. They would not clutter up the source code or confuse users
trying to learn the functionality in Sage. It would also make it more
acceptable to write stronger (in the sense of testing) but less readable
and pedagogical tests.

I don't think we should require such thorough tests for every patch
posted, but having a structured, uncluttered convention on where to put
these tests and how to write them would make coders and reviewers more
inclined to add them (e.g. when "manually" testing the code anyway).

We could discuss the details: whether or not to distribute these extra
tests with the usual Sage bundle, whether or not to run them
automatically with the patchbot, and whether or not to require that they
all still pass with new patches (as with doctests, which would impose
similar timing requirements). I don't feel very strongly about any of
these points. However, I would very much like to have a structured,
agreed-upon convention on where to put these "extra" tests.

Cheers,
Johan
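P.S. To make the "separate Sage script" idea a bit more concrete, here is
a rough sketch of the kind of file I have in mind. The file name, the
function and the particular consistency check are all invented for
illustration; nothing below is existing Sage code or an agreed-upon
layout.

# Hypothetical file: tests/test_factoring.sage (name and layout made up).
# In practice my_factor would live in the Sage library with its short,
# readable doctest (goal 1); it is repeated here only so the sketch is
# self-contained. The heavier consistency check below is what I would
# keep out of the docstring (goal 2).

from sage.all import PolynomialRing, QQ

def my_factor(f):
    r"""
    Return the factorization of the polynomial ``f``.

    EXAMPLES::

        sage: R.<x> = QQ[]
        sage: my_factor(x^2 - 1)
        (x - 1) * (x + 1)
    """
    return f.factor()

def test_factor_roundtrip(trials=100):
    # Consistency check: multiplying the factors back together must
    # recover the input, for a batch of random polynomials.
    R = PolynomialRing(QQ, 'x')
    for _ in range(trials):
        f = R.random_element(degree=6)
        if f.is_zero():
            continue  # the zero polynomial cannot be factored
        assert my_factor(f).expand() == f, "round-trip failed for %s" % f

if __name__ == '__main__':
    test_factor_roundtrip()
    print("all extra tests passed")

Rerunning everything after a fix is then a single command, e.g.
"sage tests/test_factoring.sage", instead of redoing the whole session
in the notebook by hand.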