Disregarding the obvious "marketing fluff" of Wolfram's statement, it
seems to me plausible that some programs are more likely to be correct
(at least in well-understood basic components) than some theorems'
proofs.  Here's why.

A program can be run on test data.
A program can also sometimes be analyzed at a higher level of
abstraction, namely that of a programming language whose semantics are
agreed upon; those semantics can be used to establish "theorems" that
help in proving programs correct.  That is, subroutines are
susceptible to re-use as though they were "axioms".

For example, assume I have a program M0 that can multiply two
polynomials of degree 0.

I can then write a simple program M "by induction" to multiply two
polynomials A, B, where A is of higher degree n > 0, by observing that
if A = a*x^n + A2, then A*B = (a*B)*x^n + A2*B, where a is of degree 0
and A2 is of lower degree than A.
By the induction hypothesis, M (or M0) can be used for these two
subproblems.
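A minimal Python sketch of that induction (the coefficient-list
representation, the helper add(), and all names here are my own
assumptions, not anything from M or M0 as described above):

```python
# Polynomials as coefficient lists, lowest degree first:
# [c0, c1, ..., cn] represents c0 + c1*x + ... + cn*x^n.

def M0(a, b):
    """Base case: multiply two degree-0 polynomials (scalars)."""
    return [a[0] * b[0]]

def add(p, q):
    """Add two polynomials coefficient-wise (helper, assumed)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [pi + qi for pi, qi in zip(p, q)]

def M(A, B):
    """Multiply A*B by induction on the degree of A:
    if A = a*x^n + A2, then A*B = (a*B)*x^n + A2*B."""
    if len(A) == 1:
        if len(B) == 1:
            return M0(A, B)         # both degree 0: use the base case
        return M(B, A)              # put the higher-degree operand first
    a, A2 = A[-1], A[:-1]           # leading coefficient and remainder
    n = len(A) - 1
    aB_shifted = [0] * n + [a * bi for bi in B]   # (a*B)*x^n
    return add(aB_shifted, M(A2, B))              # + A2*B, lower degree
```

Testing it on known products, e.g. (x+1)*(x+1) = x^2 + 2x + 1, gives
M([1, 1], [1, 1]) == [1, 2, 1], which is the kind of spot check against
known data the next paragraph is talking about.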

Testing such a program against known data gives a certain level of
confidence.  If the program works for degrees 0, 1, and 2, it is
unlikely to break at degree 203.  Though you could imagine that in
some programming language, on some operating system, on some hardware,
with some limited amount of memory, etc. etc., something could go
wrong, go wrong, go wrong...



Compare this to a randomly-selected published proof.

1. The theorem may be correct, but the proof may be wrong, simply
because -- who cares?  Presumably there is a way of proving it
correctly, but if the theorem is more or less obvious, who cares?  If
there is a proof, it is probably incomplete, because a complete proof
would be too tedious.  The theorem may also be wrong, but only
slightly.  Or it may be just way off.  It might still get published.
It's happened.

2. Many theorems are examined closely by perhaps 4 people, usually no
more: the author, perhaps two referees, and maybe one other person --
perhaps a graduate student.  I'm not saying this of all theorems, but
of a huge percentage of them.

3. There is no way of mechanically "testing" a published proof for
plausibility, much less "coverage", the way there is of testing a
program.



Whether Mathematica would be more correct or less correct if the
source were open is unclear to me.



But here is the important point, and why Wolfram's claim about "more
correct than proofs" is bogus.

If some randomly-selected published proof is wrong, there are
generally no consequences. Especially if the author has already
received his degree. If there is a bug in Mathematica, there may very
well be consequences. And we know there are bugs.

And so does Wolfram of course.  That is why he says,

"Wolfram Research, Inc. shall not be responsible under any
circumstances for providing information on or corrections to errors
and omissions discovered at any time in this document or the package
software it describes, whether or not they are aware of the errors or
omissions. Wolfram Research, Inc. does not recommend the use of the
software described in this document for applications in which errors
or omissions could threaten life, injury, or significant loss."

(check the "front matter" for any Wolfram package)




