>I don't mean to suggest this could be trivially done by anybody right
>now.  I'm talking about feasibility in the sense of several very hard
>weeks work by one of the top 10 Sage developers.
>
>...........
>
>1. Consider "lines of code". How many correct LOC/day does a top 10
>Sage developer write? On average. A guess.
>2. How many LOC are there in Maxima's code for integration? (easy to
>count)
>3. How many LOC are there in the code that Maxima's integration code
>USES? (not so easy to count -- includes
>pieces of many of the source files like simplification, solve,
>rational manipulation...  there is a chart about this, and
>you can also get dependency information from tools that are available
>in some lisp compilers. )
>4. How many LOC in LISP per LOC in whatever language you choose? 1,
>10, 1/10?
>
>Of course you might actually have to learn how to do this task by
>reading some pretty dense material; PhD dissertations by Bronstein,
>Trager, Davenport, Rothstein, Rioboo, Cherry, ... so that might take,
>oh, a few hours by one of the top 10 Sage developers ?
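
To make that back-of-the-envelope exercise concrete, here is a
minimal Python sketch.  Every number in it is a placeholder chosen
only for illustration, not a measurement:

    # Purely illustrative placeholder numbers -- substitute real
    # counts before drawing any conclusions.
    correct_loc_per_day = 50        # hypothetical sustained output of one developer
    maxima_integrate_loc = 10000    # hypothetical LOC in Maxima's integration code
    supporting_loc = 40000          # hypothetical LOC in the code it uses
    loc_expansion_factor = 10       # hypothetical LOC ratio for the new language

    total_loc = (maxima_integrate_loc + supporting_loc) * loc_expansion_factor
    print("about %d developer-days" % (total_loc // correct_loc_per_day))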

Integration in Axiom (and FriCAS, which Sage now includes) uses a fair
amount of the algebra library.  To get an idea of the complexity of
the problem, take a look at the algebra graph for Axiom at:

http://axiom-developer.org/axiom-website/documentation.html#buildorder

under the Axiom build order link.  Each node in the graph is a file
of algebra code (roughly analogous to a Python class) written in a
very high-level algebra language (Spad).  The edges shown represent
only immediate dependence on the prior level, but each node actually
has an average of about 20 dependency links, so the full graph would
have closer to 22,000 edges.  Maxima has similar complexity.
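
A rough Python sketch of that scale, using only the approximate
figures above (the node count is implied by them, not an exact
count):

    # Rough scale of the Axiom algebra dependency graph.
    algebra_files = 1100        # implied by 22,000 edges / ~20 deps per file
    avg_direct_deps = 20        # approximate direct dependencies per file
    print(algebra_files * avg_direct_deps)   # about 22000 dependency edges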

I think there might be a bit of overconfidence in assuming that any 
one of the "top 10 Sage developers" is going to reproduce even a
fraction of that complexity in the near term. Axiom represents
approximately 300 man-years of research and development. Maxima
has a similar history.

My experience in both Lisp and Python indicates that the code size in
Python is likely to be a factor of 10 larger than the Lisp/Spad
implementation.  And my experience with compiled Lisp versus
bytecoded Lisp (as Python is bytecoded) indicates that a Python
implementation will likely be a factor of 10 slower.  Moving to
Cython would likely regain the speed, but at the cost of encoding
explicit knowledge of the internal representation, making the system
more fragile and less agile.  And this ignores the fact that the
Maxima code has many more years of testing and field use than new
Python code would see.
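
As an illustration of how those two factor-of-10 estimates compound
(the baseline size and running time here are invented; only the
ratios come from the paragraph above):

    # Hypothetical baseline routine in Lisp/Spad.
    lisp_loc = 5000             # invented size of the original code
    lisp_runtime = 1.0          # invented running time, in seconds

    size_factor = 10            # estimated Python : Lisp code-size ratio
    speed_factor = 10           # estimated slowdown of bytecoded Python

    print("Python port: ~%d LOC, ~%.0f s per run"
          % (lisp_loc * size_factor, lisp_runtime * speed_factor))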




What puzzles me is the cognitive dissonance between the Sage goals
of "not re-inventing the wheel" and "working to connect existing
systems" and the apparent rejection of the largest bodies of CAS
work in systems like Lisp/Maxima.  The code needed to cleverly
connect Maxima to Sage is SO many orders of magnitude easier to
write than the code needed to reproduce Maxima's functionality that
I'm at a complete loss to explain the rejection.

So the question I have is: why not devote some of the top 10 Sage
developers to designing a clever, more generic, better-designed API
between Sage and Maxima?  Even better would be a way to inline Lisp
code in Python, so that Maxima code could be executed directly.
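
As a hedged sketch of the most naive possible bridge (not a proposal
for the real API): drive a Maxima process from Python and hand it
Maxima code as a string.  This assumes a maxima executable on the
PATH that accepts the --very-quiet and --batch-string options; a real
interface would keep a persistent process and actually parse the
output:

    import subprocess

    def maxima_eval(expr):
        # Toy sketch: start a fresh Maxima, run one statement, and
        # return whatever it printed.  No error handling or parsing.
        result = subprocess.run(
            ["maxima", "--very-quiet", "--batch-string=" + expr + ";"],
            capture_output=True, text=True, check=True)
        return result.stdout.strip()

    print(maxima_eval("integrate(sin(x)^2, x)"))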

Since Sage doesn't use Axiom I really don't care what the outcome is.

Tim Daly



