[I'm not sure why this thread is called "Calculus" at all!]

This is a basic mathematics/CS divide.  Mathematicians expect their
vectors of length n to have indices 1..n, and similarly for matrices
and so on.  The packages Pari and Magma use that convention
accordingly: they are written for mathematicians, staying as close to
mathematical notation as possible, and that is a great help in
getting mathematicians to do computations.
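
To make the off-by-one concrete, here is plain Python (nothing
Sage-specific, just an illustration of the convention clash):

# Python sequences are 0-based:
v = [10, 20, 30]   # a "vector" with 3 entries
print v[0]         # first entry -- what a mathematician writes as v_1
print v[2]         # last entry, v_3 on paper
# print v[3]       # IndexError, even though "v has 3 entries"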

I think there's a real problem if we tell mathematicians that to use
SAGE properly they have to both learn programming in a language they
have probably never heard of (sorry, but that is the case with Python
and mathematicians) and also re-learn the habits of a lifetime.  There
is a very steep learning curve involved in learning a new package in
any case -- it took me years to get a "feel" for Magma, and I still
don't have a good one for SAGE -- and it does not take a lot to put
people off.

Sorry if this sounds negative, but I have a feeling that sage-devel
has more CS people in it than mathematicians!

John

On 9/18/07, Joel B. Mohler <[EMAIL PROTECTED]> wrote:
>
> On Tuesday 18 September 2007 00:32, Nick Alexander wrote:
> > > Robert, Since you do so much work on Cython, maybe you could think
> > > about the formal specification of the Python language and see whether
> > >      ..
> > > not appearing in a string is ever valid Python.  I.e., could we add
> > >      [expr1 .. expr2]
> > > to the language without running into problems?
> >
> > Much like generators (K.<x>), this cannot be added to the preparser
> > without parsing arbitrary Python expressions (expr1 and expr2 in this
> > case).  At the moment, you can make the preparser barf and it would
> > be a great deal of work to fix.  Are we willing to do another
> > "correct 90% of the time" hack?  If this is considered very valuable,
> > I suggest we hijack a Python binary operator and repurpose it.  Or we
> > could uniformly preparse '..' to be that redefined operator; that
> > would be better.
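
(An aside on what the textual ".." rewrite might look like -- purely a
toy sketch, using a hypothetical inclusive_range helper; a real version
would need token-level parsing, which is exactly the difficulty Nick
describes:)

import re

def inclusive_range(a, b):
    # hypothetical helper: the mathematician's [a..b], both endpoints included
    return range(a, b + 1)

# naive textual rewrite of "[expr1 .. expr2]" -- only simple operands
# survive this, hence the "correct 90% of the time" worry above
_dotdot = re.compile(r'\[\s*(\w+)\s*\.\.\s*(\w+)\s*\]')

def preparse_dotdot(line):
    return _dotdot.sub(r'inclusive_range(\1, \2)', line)

print preparse_dotdot("f([1 .. n])")   # -> f(inclusive_range(1, n))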
>
> I don't have a strong opinion about the original proposition ... although I
> have spent the last week being thoroughly vexed by Python being zero-based
> when everything I would write about the same math on paper is indexed
> one-based.
>
> However, I did want to say this about the preparser.  My impression is that
> it is written by doing very manual character/string handling.  I think
> the 'tokenize' module could make the code much, much cleaner.  It has
> annoying (but understandable) semantics.  Here's an example below.
>
> --
> Joel
>
> import tokenize
>
> # helper class: wraps a single string as the readline-style callable
> # that tokenize.generate_tokens expects
> class line_token_stream:
>     def __init__( self, s ):
>         self.line = s
>
>     def __call__( self ):
>         # hand back the line once, then signal end of input
>         if self.line:
>             s = self.line
>             self.line = None
>             return s
>         raise StopIteration
>
> def tokenize_line( s ):
>     # yields one 5-tuple (type, string, start, end, line) per token
>     for t in tokenize.generate_tokens( line_token_stream( s ) ):
>         yield t
>
> for i in tokenize_line( "func(1+2)" ):
>     print i
>
> >
>


-- 
John Cremona
