It seems to me that "doctests" are supposed to serve two purposes.
Since I have not looked at them (well, maybe one or two over the years?),
my comments may be naive or irrelevant, but here goes.

Documentation has to be written by the programmer who writes the code,
preferably before the code is written, but certainly before the code is
inserted into a system.  It should include examples of how the code is
thought to be useful, written as complete examples if possible.  It should
include, in a separate section, the "edge" cases: the most extreme inputs
that still work, and perhaps the ones just beyond those, which fail,
but which an unwary user might expect to work.
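
As a sketch of what that might look like in Python's doctest format (the
function and its behavior here are hypothetical, invented purely for
illustration), with the edge cases kept in their own section:

```python
def clamp(x, lo, hi):
    """Return x limited to the closed interval [lo, hi].

    Typical use:

    >>> clamp(5, 0, 10)
    5
    >>> clamp(-3, 0, 10)
    0

    Edge cases -- the most extreme inputs that still work:

    >>> clamp(10, 0, 10)   # endpoint is included
    10

    And the one-more-than-that, which an unwary user might expect to work:

    >>> clamp(5, 10, 0)    # lo > hi: the interval is empty
    Traceback (most recent call last):
        ...
    ValueError: empty interval: lo > hi
    """
    if lo > hi:
        raise ValueError("empty interval: lo > hi")
    return min(max(x, lo), hi)

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```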

Then there are the tests, which are almost inevitably inadequate.  Many
programs (certainly most in Maxima) do not exhaustively test their inputs
for nonsense (e.g. non-terminating computations, ill-specified tasks),
because these conditions cannot be detected 100% of the time, and sometimes
it is better to just let stuff go through and hope that some program further
down the line can make sense of it.  For example, one could check for strict
adherence to expectations, like "argument must be a polynomial in x", but
would you notice that cos(n*acos(x)) is a polynomial in x (for a
non-negative integer n)?
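
To make that concrete: for non-negative integer n, cos(n*acos(x)) is the
Chebyshev polynomial T_n(x), though no syntactic "is this a polynomial?"
check would notice.  A quick numeric sketch (stdlib only, n = 3 chosen
arbitrarily) confirming the identity on [-1, 1]:

```python
import math

def t3_trig(x):
    # The "disguised" form: nothing about this expression looks polynomial.
    return math.cos(3 * math.acos(x))

def t3_poly(x):
    # The explicit polynomial it equals: T_3(x) = 4x^3 - 3x.
    return 4 * x**3 - 3 * x

# Compare the two forms at sample points across the domain of acos.
for i in range(-10, 11):
    x = i / 10
    assert math.isclose(t3_trig(x), t3_poly(x), abs_tol=1e-9)
print("cos(3*acos(x)) == 4x^3 - 3x on [-1, 1]")
```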

But some checking is probably worthwhile, like checking for the expected
number of arguments. Though even there, someone might have ANOTHER
function in mind...
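
A minimal sketch of the kind of cheap checking meant here (the function
and its checks are hypothetical): enforce arity and reject obvious
nonsense, while deliberately not trying to decide what the caller "really"
meant:

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial with coefficients coeffs (highest degree
    first) at x, by Horner's rule.

    Cheap checks only: the argument count is enforced by the signature,
    and each coefficient must at least be a number.  We make no attempt
    to detect semantically dubious input.
    """
    if not coeffs:
        raise ValueError("coeffs must be a non-empty sequence")
    result = 0
    for c in coeffs:
        if not isinstance(c, (int, float, complex)):
            raise TypeError(f"coefficient {c!r} is not a number")
        result = result * x + c   # Horner step
    return result
```

Even this much can surprise: a caller with ANOTHER calling convention in
mind (say, coefficients in the opposite order) passes every check and
silently gets the wrong answer.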

Anyway, a list of things the programmer knows will not work, and which
someone might usefully fix up later, could be helpful.

So could a list of things you might expect the program to do but that it
won't (features?), e.g. "This program does not work for complex numbers".

Of course, in some cases writing the framework to check for all kinds of
buggy input can be daunting, and may even be expensive at runtime.

In the case of Sage, where important functionality is simply imported from
external packages after the fact, and the programmer may be long gone,
your options are limited.

Having a naive person come by after the fact and write tests can even be
counterproductive --- it gives you a false sense of confidence that
something works when the tests are quite meaningless.  As an example,
I recall someone testing/comparing Macsyma's polynomial factoring program
against some other program, and drawing conclusions about the speed of
the Berlekamp algorithm.

In fact, most of the tests were composed of "well-known" factoring special
cases, most of which were specifically tested for in Macsyma, and for which
the algorithm was never invoked.

Also, if you insist on writing doctests for Maxima functionality accessed
through Sage, you should probably document what Maxima does.  Then you can
separately document what Sage does to prepare input for, and process output
from, Maxima.

