On Mon, Dec 6, 2010 at 8:01 AM, David Kirkby <david.kir...@onetel.net> wrote:
> On 4 December 2010 05:32, William Stein <wst...@gmail.com> wrote:
>> On Thu, Dec 2, 2010 at 6:40 PM, David Kirkby <david.kir...@onetel.net> wrote:
>
>>> It's clear you have the ability to write decent tests, but I think it's
>>> fair to say there are a lot of Sage developers who have less knowledge
>>> of this subject than you [=Bradshaw].
>>
>> True.  However, I think the general mathematical background of the
>> average Sage developer is fairly high.   If you look down the second
>> column of
>>   http://sagemath.org/development-map.html
>>
>> you'll see many have Ph.D.'s in mathematics, and most of those who
>> don't are currently getting Ph.D.'s in math.
>
> This presupposes that people of fairly high mathematical knowledge are
> good at writing software.

No, it's an observation that people of fairly high mathematical
knowledge are the ones actually writing software.

> I have yet to be convinced that having a PhD in maths, or studying for
> one, makes you good at writing software tests. Unless those people
> have studied the different sorts of testing techniques available -
> white box, black box, fuzzing, etc. - I fail to see how they can be in
> a good position to write the tests.

Because they understand what the code is trying to do, what results
should be expected, etc. If I told someone who was an expert in all
these (admittedly valuable) testing techniques to write some tests
that computed special values of L-functions of elliptic curves, how
would they do it? It's not like there's just a command in Mathematica
that can do this, and even if there were, who knows if they'd be able
to understand how to use it.

If I gave it to anyone with an understanding of elliptic curves,
they'd immediately pick a positive rank curve or two, and make sure
the value is very close to zero, then probably look up some special
values in the literature, etc. Or say the algorithm computed
heights of points. To someone without the background, it would look like
a random function from points to floating-point numbers, but anyone in
the know would instantly write some tests to verify bilinearity,
vanishing at torsion points, etc.
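To make that concrete, here is a sketch of the shape such property tests
take. It is plain Python with a toy symmetric pairing (an ordinary dot
product) standing in for the real canonical-height pairing, and a
distinguished zero element standing in for torsion points; none of the
names below are real Sage APIs.

```python
import math

def pairing(P, Q):
    """Toy stand-in for the canonical height pairing <P, Q>:
    an ordinary dot product, which is symmetric and bilinear."""
    return sum(p * q for p, q in zip(P, Q))

def height(P):
    """The 'height' is the pairing of a point with itself."""
    return pairing(P, P)

def scale(m, P):
    """Stand-in for multiplying a point by an integer m."""
    return tuple(m * x for x in P)

def add(P, Q):
    """Stand-in for the group law (addition of points)."""
    return tuple(p + q for p, q in zip(P, Q))

# Property 1: bilinearity, <mP + nQ, R> == m<P, R> + n<Q, R>.
P, Q, R = (1.5, -2.0), (0.25, 3.0), (4.0, 1.0)
m, n = 3, -2
lhs = pairing(add(scale(m, P), scale(n, Q)), R)
rhs = m * pairing(P, R) + n * pairing(Q, R)
assert math.isclose(lhs, rhs)

# Property 2: the height vanishes at the identity element,
# standing in for torsion points, where the real height is zero.
O = (0.0, 0.0)
assert height(O) == 0.0
```

The point is that the *properties* (bilinearity, vanishing on torsion)
come straight from the mathematics; someone who knows the theory gets
these tests essentially for free, tolerance issues aside.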

Of course, to achieve the ideal solution, you'd have someone with the
math and testing background and lots of time on their hands, or at
least have several different people with those skills involved.

> It's fairly clear in the past that the "Expected" result from a test
> is what someone happened to get on their computer, and they did not
> appear to be aware that the same would not be true of other
> processors.

Most of the time that's due to floating-point irregularities, and in
a smaller percentage of cases it's due to an
actual bug that didn't show up in the previously tested environments. In
both of these cases the test, as written, wasn't (IMHO) wrong. Not
that there haven't been a couple of really bad cases where bad results
have been encoded into doctests, which is the fault of both the author
and referee, but I'm glad that these are rare enough to be quite
notable when discovered.
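To illustrate the floating-point case (a plain-Python sketch;
`l_value_stub` and its numbers are invented for illustration, not a real
Sage call): comparing against the recorded value with an explicit
tolerance survives last-bit differences across processors, where exact
doctest-style string matching does not.

```python
import math

def l_value_stub(curve_label):
    """Hypothetical stand-in for a numerical computation whose
    low-order bits may differ across processors or compilers."""
    # Pretend two platforms produced slightly different results:
    results = {"x86": 0.7257177057842622, "sparc": 0.7257177057842619}
    return results["x86"]

expected = 0.725717705784262  # value the test author happened to get

# A naive doctest compares repr() output exactly, so
#   repr(l_value_stub("37a1")) == repr(expected)
# can fail on another CPU even though nothing is wrong.

# A tolerance-based comparison is robust to last-bit differences:
assert math.isclose(l_value_stub("37a1"), expected, rel_tol=1e-12)
```

Writing the expected value with a stated tolerance, rather than as an
exact decimal expansion, records what the author actually knew.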

> Vladimir Bondarenko has been very effective at finding bugs in
> commercial maths software by use of various testing techniques, yet I
> think I'm correct in saying Vladimir does not have a maths degree of
> any sort.

I agree, people of all backgrounds can make significant contributions.

>>> As such, I believe independent verification using other software is
>>> useful. Someone remarked earlier it is common in the commercial world
>>> to compare your results to that of competitive products.
>>
>> +1 -- it's definitely useful.   Everyone should use it when possible
>> in some ways.
>
> I'm still waiting to hear from Wolfram Research on the use of Wolfram
> Alpha for this. Personally I don't think there's anything in the terms
> of use of Wolfram Alpha stopping use of the software for this, but
> someone (I forget who), did question whether it is within the terms of
> use or not.
>
>> But consistency comparisons using all open source software when
>> possible are very useful indeed, since they are more maintainable
>> longterm.
>
> Yes.
>
> Especially if Wolfram Research thought it would hurt their revenue
> from Mathematica sales, they could very easily rewrite the terms of use
> to disallow the use of Wolfram Alpha to check other software.

That would be a chilling statement indeed. "You're not allowed to
compare these results to those computed with open source software..."
Imagine the absurd consequences this would have on, e.g., results that
appear in publications.

- Robert

-- 
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to 
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org
