Mistakes by authors and/or referees and/or editors are fairly
commonplace.  Requiring the author to supply all software in
source code (e.g. all of Sage version XYZ), requiring referees
to examine it for possible errors, and requiring editors/publishers
to make the source code available (archivally -- forever -- online?)
seems like overkill, generally.

If you go back a few years and look at what some people would
like to think of as the premier academic journal for "computer
algebra",
you will see poorly typeset (camera "unready") manuscripts written
in broken English.  In some cases the research presented is
duplicative (i.e., previously published, in more general form) and
would have been rejected by a competent referee or editor.
In other cases the paper is fundamentally incomprehensible,
lacking context, rationale or motivation, or evidence of any
contribution.
These defects would not be affected by open source anything,
and are, in my view, far more serious.

(So as not to offend other journals, I'm talking specifically about
JSC, the Journal of Symbolic Computation; I have, from time to time,
been on the editorial board, where I complained from the "inside"...)

I do agree that there are abuses, e.g. a recent paper which compared
a proprietary program for doing X in version 10 of Maple to a
proprietary program in version 13, showing the new version to be
much faster because of a new algorithm.  (The new algorithm WAS
described.)

Independent implementation (by me) showed that there was a far
superior (faster) and much simpler algorithm, and that the
advertised speedup was undoubtedly due to quite different circumstances.

I supplied my source code, but this was a kind of "challenge" in which
people were presumably motivated to read each other's code.  It would
not normally happen that way unless you have a particularly zealous
(or rivalrous) referee.  Such a referee may ask for source code, which
is different from requiring it to be open.

I think your point about referees is generally correct: if you find
an author you trust who has written a paper on a topic of interest,
who has exposed his/her work to public scrutiny, and who has made a
good-faith effort to examine all comments and corrections and has
responded to on-point legitimate criticisms, then that may be useful.
The advent of science "blogs" is perhaps an implementation of some of
these ideas, though dealing with errors by wading through a
chronological log that includes the technical equivalent of spam is
not so good.

The imprimatur of a journal editor has substantially less clout these
days, and it is easier to find, reference, and measure the influence
of papers even if they are entirely unrefereed, e.g. on arXiv.org.

On Nov 5, 8:24 am, john_perry_usm <john.pe...@usm.edu> wrote:
> > Does the lack of availability of source code for a program mean it is
> > unacceptable to publish the results of that program in a journal?  I
> > think not.
>
> I know of at least one recent case where claimed improvements in
> performance were due not to a new algorithm (as claimed by the
> authors) but to a more sophisticated programming style (despite the
> authors' claims to the contrary). Had the source code not been made
> available for both implementations, no one would have discovered this.
>
> Likewise, all of us make mistakes, and sometimes a paper documenting
> the algorithm is littered with errors (typographical and, sometimes,
> theoretical). In this case, having the source code available would
> promote understanding, since the code & the paper can be checked
> against each other.
>
> In the end, I guess it depends on how much you trust the referees. The
> more I get published, the less I trust the referees. Read that how you
> will ;-)
>
> regards
> john perry

-- 
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to 
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org
