On 08/29/10 09:56 AM, Tim Daly wrote:
tl;dr old curmudgeon flaming on about the dead past, not "getting it"
about Sage.
Robert Bradshaw wrote:
In terms of the general rant, there are two points I'd like to make.
The first is that there's a distinction between the Sage library
itself and the many other spkgs we ship. By far the majority of your
complaints have been about various arcane spkgs. Sage is a
distribution, and we do try to keep quality up, but it's important to
note that much of this software is not as directly under our control,
and just because something isn't as good as it could be from a
software engineering perspective doesn't mean that it won't be
extremely useful to many people. Even if it has bugs. We try to place
the bar high for getting an spkg in but blaming the Sage community for
poor coding practices in external code is a bit unfair. I hold the
Sage library itself to a much higher standard.
The point that "software is not as directly under our control" is not
really valid.
Agreed.
The statement that Sage tries "to place the bar high for getting an spkg in"
isn't actually much of a claim. I've watched the way spkgs get voted onto the
island and it usually involves a +1 by less than half a dozen people. Would you
really consider this to be placing "the bar high"?
No, I don't think it places a high bar either.
It is probably seen as a high bar by those who do not have a software engineering
background. Those who do, I suspect, would come to the same conclusion as you and I.
Take a look at SQLite's testing procedures. The test code is 647 times larger
than the actual code for the database. I doubt that level of attention to detail
would have been very useful in Sage development. One needs to find a sensible
compromise.
I'd consider developing a test suite, or an API function-by-function code review,
or a line-by-line code review to be placing the bar high.
Yes, though one does need to be practical about it. Those sorts of things are
essential in code for specific applications (medical, aeronautical), but are
probably not practical for Sage. I doubt anyone at Wolfram Research has ever
gone through every line of ATLAS code, but they use ATLAS.
At the moment I see Sage writing test cases for Python code but I don't see the
same test cases being pushed into the spkgs. Even where external test cases are
available (e.g. the computer algebra test suites for Schaum's and Kamke) I don't
see them being run.
That is changing. I've gone through the packages and created a list of those that
are missing the spkg-check files that would allow their self-tests to be run:
http://trac.sagemath.org/sage_trac/ticket/9281
The new Pari package will run its test suite if SAGE_CHECK is set to "yes". I've
personally sorted out a couple of packages recently and am just doing cliquer now.
Robert agreed with me the other day that running short test suites from
spkg-install (i.e. on every build) was reasonable.
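
To make that concrete, here is a minimal sketch of the sort of spkg-check script
ticket #9281 is asking for. It assumes the package's sources have already been
built in a src/ subdirectory and that upstream provides a "make check" target;
both the layout and the target name vary between packages, so treat this as an
illustration rather than a description of any existing spkg.

  #!/usr/bin/env bash
  # Hypothetical spkg-check sketch: run a package's own self-tests.
  # Assumes the sources were unpacked and built in src/ and that the
  # upstream Makefile has a "check" target -- both are assumptions made
  # for illustration only.

  if [ -z "$SAGE_LOCAL" ]; then
      echo "SAGE_LOCAL undefined -- this script should be run by Sage's installer."
      exit 1
  fi

  cd src || exit 1

  # Prefer the make chosen by the Sage build environment, if set.
  ${MAKE:-make} check
  if [ $? -ne 0 ]; then
      echo "Error: the package's test suite failed."
      exit 1
  fi

With that in place, installing the package with SAGE_CHECK set, e.g.

  SAGE_CHECK=yes ./sage -i pari-x.y.z.spkg

(the version number is just a placeholder) would run the upstream tests as part
of the install.
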
The conclusion that blaming "the Sage community for poor coding practices in
external code" is "a bit unfair" is not valid.
Agreed. The Sage community, in the most general sense, made those decisions.
Still to come will be the "code rot" issue. Open source packages tend to have a
very small number of active contributors. Projects tend to stop when those people
drift away.
I think this can be avoided to some extent by not adding to the core Sage library
very specialised items that are only of use to a few people. Just because person X
develops some code during his PhD, no matter how useful that may be to him, I don't
think it needs to be a standard part of Sage if it's only going to be used by very
few people.
Now that the wave of new spkg adoption has slowed I expect to see a growing
need for maintaining "upstream" code. By *design*, their problems are now your
problems. Who will debug a problem that exists in 500,000 lines of upstream code?
Who will understand the algorithms (e.g. sympow) written by experts, some of whom
are unique in the world, and debug them?
How do you expect Wolfram Research, Maplesoft and similar companies to deal with
such issues? They must hit them too. I suspect they have a few nightmares with
this, but the best approach is probably to have decent documentation. If code is
well commented, and has references to the papers where the algorithms are
published, then it will probably be maintainable.
Writing new code is always fun. Maintaining old code you didn't write is painful.
But from an end-user perspective "it is all Sage", so all bugs are "Sage bugs".
That may seem unfair, but the end-user won't know or care.
Exactly.
The belief that Sage will gradually rewrite its pile of code (5 million lines?)
to a higher quality seems odd.
As you say, it will not happen.
So at the time Sage was being developed there *were* standards in place. You seem
to feel that Sage was started "pre-standard" (2005?) and "pre-referee" (ISSAC?).

I see reviews of bug fixes but I don't see reviews of spkgs. We are now over
50 years into the development of computational mathematics, and Sage has the goal
of competing with systems developed in the 1970s and 1980s, over 30 years ago.
This would be a great thing if Sage were to deeply document the algorithms,
develop the standards, and/or prove the code correct, but I don't see anyone
advocating any of these. I don't see anyone advocating alternative ideas that
would "raise the bar" in computational mathematics.
Given what you said a few days back, that there were few institutions teaching
computational mathematics, would you agree with my point that getting more
developers with computer science skills into Sage is a step in the right
direction towards raising the bar?
Architects design buildings. Builders build them. The architect and the builder
communicate, and the result is a decent building.
Tim Daly
(curmudgeon and rjf wannabe)
Dave