[sage-devel] Re: Sage 3.4 build problem on Mac OS X

2009-05-02 Thread Prabhu Ramachandran

mabshoff wrote:
>> I think this should be documented somewhere so others don't fall into
>> the same trap.  Thanks.
> 
> Cool. Thanks for telling us - I have made this #5961.

Glad to be of some assistance.  BTW, the .pydistutils.cfg will affect 
any new spkg installs also since distutils will always pick it up. 
There is a Python issue for this too:

  http://bugs.python.org/issue1180

So one option would be to backport the patch to the Python version you 
ship and always invoke setup.py such that it ignores the 
.pydistutils.cfg.  Of course, a simple test script that looks for the 
file and warns the user (like the macports warning/error) would also work.


cheers,
prabhu




[sage-devel] Re: Sage 3.4 build problem on Mac OS X

2009-05-02 Thread mabshoff



On May 2, 1:25 am, Prabhu Ramachandran  wrote:
> mabshoff wrote:
> >> I think this should be documented somewhere so others don't fall into
> >> the same trap.  Thanks.

Hi,

> > Cool. Thanks for telling us - I have made this #5961.
>
> Glad to be of some assistance.  BTW, the .pydistutils.cfg will affect
> any new spkg installs also since distutils will always pick it up.
> There is a Python issue for this too:
>
>  http://bugs.python.org/issue1180

Thanks for the link - I should really get on the python list where all
the bugs are CCed :)

> So one option would be to backport the patch to the Python version you
> ship and always invoke setup.py such that it ignores the
> .pydistutils.cfg.  Of course, a simple test script that looks for the
> file and warns the user (like the macports warning/error) would also work.

Yeah, looking at the patch I would prefer to just warn the user and
tell him/her to move the file out of the way. The patches at that
ticket you quote all seem invasive and I don't want to touch every
python based spkg in Sage. Another thing is that if someone wants to
run

 ./sage -sh
 python setup.py install

manually it would still be broken. So sticking the test somewhere in
sage-env might be a good idea.

> cheers,
> prabhu

Cheers,

Michael



[sage-devel] Re: Sage 3.4 build problem on Mac OS X

2009-05-02 Thread Prabhu Ramachandran

mabshoff wrote:
>> So one option would be to backport the patch to the Python version you
>> ship and always invoke setup.py such that it ignores the
>> .pydistutils.cfg.  Of course, a simple test script that looks for the
>> file and warns the user (like the macports warning/error) would also work.
> 
> Yeah, looking at the patch I would prefer to just warn the user and
> tell him/her to move the file out of the way. The patches at that
> ticket you quote all seem invasive and I don't want to touch every
> python based spkg in Sage. Another thing is that if someone wants to
> run
> 
>  ./sage -sh
>  python setup.py install
> 
> manually it would still be broken. So sticking the test somewhere in
> sage-env might be a good idea.

In that case this information may help:

http://docs.python.org/install/index.html#distutils-configuration-files

It looks like checking for .pydistutils.cfg (and the equivalent on 
Windows) should cover it, since the system distutils.cfg won't clash with 
Sage's Python.
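
To make that concrete, here is a minimal sketch of such a check (the 
function name and the warning text are placeholders, not actual sage-env 
code; the file names follow the distutils documentation linked above):

import os
import sys

def warn_about_pydistutils_cfg():
    # Per-user distutils config: ~/.pydistutils.cfg on Unix,
    # pydistutils.cfg (no leading dot) in the home directory on Windows.
    name = 'pydistutils.cfg' if os.name == 'nt' else '.pydistutils.cfg'
    path = os.path.join(os.path.expanduser('~'), name)
    if os.path.exists(path):
        sys.stderr.write(
            "WARNING: %s exists and will be picked up by distutils;\n"
            "move it out of the way before building Sage spkgs.\n" % path)

warn_about_pydistutils_cfg()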

cheers,
prabhu





[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread Alex Ghitza

On Fri, May 1, 2009 at 5:13 PM, mabshoff  wrote:
>
> While going over the open tickets in 4.0 I noticed this ticket:
>
>   #5943 (Sage 3.4.2.a0: prime_pi(2^50) segfaults)
>
> If someone could take a stab at that it would be nice since that is
> brand new code and ought to be a little bit more stable than that. I
> am waiting on a fix for #5952 in the morning, i.e. Saturday, so some
> hero might want to earn some brownie points over night :)
>

Brownies are overrated.

I played around with prime_pi() for a while, both on sage.math and on
my laptop at the office (macbook running 32-bit archlinux).  I didn't
manage to get a segfault on either machine with prime_pi(2^50).  I
guess that's the good news?  Anyway, the bad news is that the answers
returned do not agree between the two machines, e.g.

{{{
# on sage.math
sage: time prime_pi(2^50)
CPU times: user 4854.46 s, sys: 3.17 s, total: 4857.63 s
Wall time: 4857.73 s
33483379603407
#
# on my laptop
sage: time prime_pi(2^50)
CPU times: user .60 s, sys: 7.81 s, total: 5563.40 s
Wall time: 5598.64 s
21969300962685
}}}

I'm pretty sure that the sage.math answer is more likely to be the
right one.  You can maybe guess from the timings why I didn't try
prime_pi(2^51).  I have, however, tried smaller values.  I'm going to
put that data up on the trac ticket.

I have absolutely no idea what's wrong, but something definitely is,
and hopefully this will help someone fix it.


Best,
Alex


-- 
Alex Ghitza -- Lecturer in Mathematics -- The University of Melbourne
-- Australia -- http://www.ms.unimelb.edu.au/~aghitza/




[sage-devel] Re: sage and symbolic software design

2009-05-02 Thread Jonathan

rjf,

On a number of your points I agree (see below).  However, I think
there is one significant problem with your point of view.  In my
roughly 30 years of experience as a chemist and professor using
software for:
1) computations
2) writing
3) database work
4) data acquisition
5) data analysis/reduction
6) presentations
7) teaching tools
8) dissemination (web, email, etc.)
I have found that most software written by pure computer programmers
is not as useful or error-free as the software written by people who
actually do the kind of work the software is designed to do.

Some examples:

A number of times I have had programmers work in my laboratory on
interfacing computers to hardware to control the hardware or collect
data from it.  Every time, I've had to rewrite the code (sometimes
from scratch) to get it to work properly.  The first time I thought it
was a matter of not specifying things properly, but later it became
clear the problem was basic understanding of what the software really
was doing.

There are a number of software packages designed for analysis and
display of large amounts of experimental data.  One of the best
(blatant advocacy warning!) is IgorPro by Wavemetrics, Inc.  The key
here is that IgorPro was originally written by people who needed to
collect and analyze large amounts of data.  The code development is
still directed and primarily done by these people.  It is not an
opensource package, but all the algorithms are carefully documented
and traceable to published sources.  This way I can check and know
exactly what I am doing to my data.  So opensource is not the only
way.  The key in this case is that the software is good for data
analysis because the people who write it understand exactly how it
will be used.  Most other software of this type appears to be written
by people who only have a vague idea of how it will be used.

Microsoft Office is another example of how code written by people who
don't actually use the software for its intended purpose leads to poor
programs.  I know that at least some of the people MS hired to work on
their products are very bright and capable, but the software is
nothing short of "clunky".

The quantum chemistry packages we use in chemistry for modeling
molecular behavior are another example.  The code is almost
exclusively written by chemists and physicists.  Once again, the issue
is the difficulty of explaining sophisticated mathematical and
physical concepts to computer programmers who do not have the
background.  In general, we find it easier and more time efficient for
us to learn the computer science than the other way around.  This
means we do fruitfully collaborate with computer scientists, but
rarely do they get to do much programming for us.

> Maybe because most people are not too bright? :)
This comment is unnecessary.  LISP was designed for a specific purpose
and I do think that computer programming languages have improved since
then.

> I obviously view the phrase "viable ... alternative" differently.
>
> I don't know enough about Magma, but for suitable values of "viable"
> your job is done already.
> Maxima is a free open source alternative to Mathematica and Maple.
> So is Axiom.
> So is Jacal, Mathomatic, Reduce, 
> And for Matlab, there is Octave.
You have a very good point here.  The SAGE community should be very
careful to not duplicate effort unnecessarily.  However, my comments
above apply.  If the existing code does not integrate well into the
much easier-to-use paradigm of Sage, it will have to be rewritten.

> I don't understand "bus factor". But I disagree that more people
> understanding the code is necessarily better.
> It is an intellectual challenge to build a hierarchy of programs to do
> something significant. I would hope that, after a point, programmers
> will use the abstractions, interfaces, and code libraries of others
> effectively without looking at or understanding the internals.  For
> example, I used code for computing Voronoi diagrams recently. I have
> no interest in studying the particular code.  I suppose I might look
> at other code, sometime, but certainly not as a rule.
>
As an experimental scientist, I assure you that the more observers
there are, the more likely issues (or bugs) will be recognized.  That said, using
code that already exists is a good idea as long as it really does what
you want.  A good example is my contribution to the Jmol package
(automatic generation of web pages with Jmol embedded).  I wrote
almost no user interface code.  I used the Java SWING package because
it did what I needed and saved me lots of coding time.  I only had to
worry about the generation of web pages that behaved the way we wanted
Jmol to work.

> Perhaps this is one difference between computer science and
> mathematics. For a mathematician, a proof is acceptable if it is
> somehow internalized or understood "down to the ground". A theoretical
> computer scientist dealing with proofs might act like a mathematician,

[sage-devel] Re: sage and symbolic software design

2009-05-02 Thread ahmet alper parker

"We never have time to do it right, but always have time to do it again."
from: http://www.netbeans.org/kb/60/uml/why-model.html
I personally do not support the idea of rewriting/replacing Maxima, but
the discussions reminded me of the words above. This is why I spend a
large percentage of my time on planning instead of coding.
Personally, I think Lisp was designed for symbolic work, and it is more
productive to write symbolic software in Lisp than in Python, but I can,
to some extent, understand if people find it harder to find Lisp
programmers than Python programmers. So maybe the long-term educational
objective of some computer/math courses and colleges should be to
motivate and educate more Lisp people.
Also, regarding the replacement of Maxima: when William commented that
they could, to some extent, rewrite the code to compete with
Mathematica, Maple, etc., I think the users should be segmented and
surveyed about which capabilities they need. Maybe someone only wants a
simple user interface to integrate, and maybe such users far outnumber
those who can deal with much more advanced tools in a command-line
interface. Maybe some market-research approach can illuminate the
direction of the discussion...
AAP

On Sat, May 2, 2009 at 4:15 PM, Jonathan  wrote:
>
> rjf,
>
> On a number of your points I agree (see below).  However, I think
> there is one significant problem with your point of view.  In my
> roughly 30 years of experience as a chemist and professor using
> software for:
> 1) computations
> 2) writing
> 3) database work
> 4) data acquisition
> 5) data analysis/reduction
> 6) presentations
> 7) teaching tools
> 8) dissemination (web, email, etc.)
> I have found that most software written by pure computer programmers
> is not as useful or error free as the software written by people who
> actually do the kind of work the software is designed to do.
>
> Some examples:
>
> A number of times I have had programmers work in my laboratory on
> interfacing computers to hardware to control the hardware or collect
> data from it.  Every time, I've had to rewrite the code (sometimes
> from scratch) to get it to work properly.  The first time I thought it
> was a matter of not specifying things properly, but later it became
> clear the problem was basic understanding of what the software really
> was doing.
>
> There are a number of software packages designed for analysis and
> display of large amounts of experimental data.  One of the best
> (blatant advocacy warning!) is IgorPro by Wavemetrics, Inc.  The key
> here is that IgorPro was originally written by people who needed to
> collect and analyze large amounts of data.  The code development is
> still directed and primarily done by these people.  It is not an
> opensource package, but all the algorithms are carefully documented
> and traceable to published sources.  This way I can check and know
> exactly what I am doing to my data.  So opensource is not the only
> way.  The key in this case is that the software is good for data
> analysis because the people who write it understand exactly how it
> will be used.  Most other software of this type appears to be written
> by people who only have a vague idea of how it will be used.
>
> Microsoft Office is another example of how code written by people who
> don't actually use the software for its intended purpose lead to poor
> programs.  I know that at least some of the people MS hired to work on
> their products are very bright and capable, but the software is
> nothing short of "clunky".
>
> The quantum chemistry packages we use in chemistry for modeling
> molecular behavior are another example.  The code is almost
> exclusively written by chemists and physicists.  Once again, the issue
> is the difficulty of explaining sophisticated mathematical and
> physical concepts to computer programmers who do not have the
> background.  In general, we find it easier and more time efficient for
> us to learn the computer science than the other way around.  This
> means we do fruitfully collaborate with computer scientists, but
> rarely do they get to do much programming for us.
>
>> Maybe because most people are not too bright? :)
> This comment is unnecessary.  LISP was designed for a specific purpose
> and I do think that computer programming languages have improved since
> then.
>
>> I obviously view the phrase "viable ... alternative" differently.
>>
>> I don't know enough about Magma, but for suitable values of "viable"
>> your job is done already.
>> Maxima is a free open source alternative to Mathematica and Maple.
>> So is Axiom.
>> So is Jacal, Mathomatic, Reduce, 
>> And for Matlab, there is Octave.
> You have a very good point here.  The SAGE community should be very
> careful to not duplicate effort unnecessarily.  However, my comments
> above apply.  If the existing code does not integrate well into the
> much easier to use paradigm of SAGE it will have to be rewritten.
>
>> I don't understand "bus factor".

[sage-devel] Re: sage and symbolic software design

2009-05-02 Thread rjf



On May 2, 6:15 am, Jonathan  wrote:
 snip
[How programs written by application specialists in your area and in
others have been more useful than programs written by others not
familiar with the application area]

Sure, this is true.  It is certainly true of computer algebra systems
where (for example) relatively large amounts of effort are devoted to
parts of systems which are pretty much doomed to be of almost no use
except demonstrations. Simple example: almost no one other than
freshman calculus students are interested in doing symbolic indefinite
integrals, and even they are not interested in the decision procedure
for integration in finite terms in terms of elementary functions. The
applications are for definite integrals, which have for many many
years been computed adequately (in most cases) by numeric routines.
These numeric routines have been around in CAS, but distinctly as
second-class citizens in most of them, for most of the time.  The PhD
dissertations were on topics of interest to people studying "decision
problems".
Not to say they were "wrong", just off the mark for applications.


> As an experimental scientists, I assure you that the more observers
> the more likely issues (or bugs) will be recognized.
...
If the choice is between discovering new algorithms or being the
1000th person to re-read an old one, I think the more productive
person would try to strike out in new territory.
If you are looking for a PhD, you certainly look for novelty.



>
> This comes back to an issue of documentation.  Most commercial
> computer programs have poor to non-existent documentation on what they
> actually do (IGOR, see above, is an exception).

I think that Mathematica has extensive documentation, once you
understand that you won't ordinarily see the inside of the code.
Whether "extensive" means "easy to use" or not is a matter of opinion.

> As a scientist, I
> need to be able to check exactly what the code I use does.  This means
> that I have very limited choices on software I can use for my work.
> Open source solves this problem by leaving the code visible, so it can
> be checked if I need to.

I think this is doubtful.  Maxima is open source, and yet William
Stein not only refuses to look at it, he seems to throw his hands up in
exasperation if it doesn't work.  He seems to think that it would be
easy to read if it were written in Python instead of Lisp, but this is
just wishful thinking.  In years past, people would write "flow
charts" or use other mechanisms etc. It didn't work either.

...
> I echo, William:  Software sucks.  Thus we must be able to continually
> improve it.

no, I think that software can, sometimes, just be correct. Parts of
Maxima were correct in 1968, and have not been changed since. These
are, in some senses,  the parts that are continually reimplemented by
people writing new computer algebra systems.  The parts that are
easily characterized formally.  Polynomial arithmetic for example.

 Not to say that these parts are entirely uninteresting -- there are
current research activities dealing with optimizing non-trivial
algorithms for cases where polynomials have astronomically many terms,
in which case the old methods can sometimes be improved upon.


> In most cases open source has worked well for this.  Open
> source is essentially a fast version of scientific peer-review/
> publication and the way our arts and literature build on the work of
> previous people.

I have not counted recently, but there were, I think, something like
12,000 open-source text editors for MS-DOS.
I think that the evidence is quite substantial that the open source
movement, statistically, does not consist of people building on the
work of previous people, but deciding to duplicate, perhaps with minor
variation, the work of previous people, or explicitly making forks of
projects.  When William Stein talks about social issues, this is a
recognition of reality.

There are some notable exceptions you might think, e.g. Linux comes to
mind.  But do you know how many different free versions of Un*x there
really are now, or have been in the past, and how much intellectual
effort has been squandered  (or perhaps devoted to following the pure
rather than the polluted track?)

...
>
>
>
> > > It seems like you view Sage as a "waste of effort", since it is new.
>
> > (RJF)  Well, if you exactly replaced  Maxima by writing it over in Python,
> > Yes. It would be a waste of effort.
>
> This rewrite definitely has potential to be wasted effort.  William
> has provided some reasons why it might not be.

I guess I do not recall a single reason I agree with. In fact, the only
ones that come to mind are...

(a) writing it in python would eliminate lisp, a language that
students and probably faculty don't know.
(b) writing it in python would make it faster, (highly unlikely, by
the way).
(c) writing it in python would make the (apparently inefficient)
interface between python and lisp disappear.
 (a case of shoot

[sage-devel] Re: NSF futures in comp alg conference: some comments

2009-05-02 Thread kcrisman


Great report!

David and I are now at the East Coast Computer Algebra Day which
followed his conference, and David Bailey is speaking about PSLQ.  Any
sense on the status of http://trac.sagemath.org/sage_trac/ticket/853
?  It seems like this is one of the few things Maple etc. have that
Sage does not from this standpoint - or has a different implementation
since been added?  Also, there are some licensing issues probably,
just from perusing the relevant websites mentioned on that ticket.

- kcrisman



[sage-devel] Re: sage and symbolic software design

2009-05-02 Thread Marshall Hampton



On May 2, 10:06 am, rjf  wrote:
> perhaps I've missed something?

Well, you've missed so much that you clearly just enjoyed writing a
flame.

I use Sage for a number of research purposes for which it is the only
system that integrates all the things I need (for example, gfan).  For
me, speeding up basic symbolic operations by moving to python/cython
makes total sense - there is no doubt in my mind that adding what I
need to Maxima would take a lot longer.  As you yourself pointed out,
doing things like symbolic indefinite integrals isn't that important
for many purposes, including mine.

It always seems that you are unwilling to accept that there are a
number of areas in which the python/C ecosystem is in a much healthier
state than Lisp.  Bioinformatics is a good example; in bioinformatics
perl is much more popular than python but I think python was a good
choice for sage given the very wide range of applications we are
aiming for.

M.Hampton



[sage-devel] Re: Wolfram Alpha and Google (Trendalyzer)

2009-05-02 Thread Robert Dodier

mark mcclure wrote:

> There's a lovely little article in the February 2009 issue
> of the monthly on using integrals to approximate pi.  The
> author "discovers" some nice rational approximations of pi
> by systematically searching through integrals of the
> form
>
> integrate(
> (x^m * (1 - x)^n * (a + b*x + c*x^2))/(1 + x^2),
>   x, 0, 1)
>
> with Maple.  Unfortunately, Maxima (and therefore Sage)
> cannot do these integrals.

Maybe a different example is needed; Maxima can now
compute such integrals. (The code was in CVS at the time
and now it's been released.) Maxima's symbolic integration
has been greatly strengthened by recent work of Dieter Kaiser
and Raymond Toy.

FWIW

Robert Dodier




[sage-devel] Re: sage and symbolic software design

2009-05-02 Thread Maurizio

Hello

> Sure, this is true.  It is certainly true of computer algebra systems
> where (for example) relative large amounts of effort are devoted to
> parts of systems which are pretty much doomed to be of almost no use
> except demonstrations. Simple example: almost no one other than
> freshman calculus students are interested in doing symbolic indefinite
> integrals, and even they are not interested in the decision procedure
> for integration in finite terms in terms of elementary functions. The
> applications are for definite integrals, which have for many many

I tend to disagree with this. In my opinion, you consider data
analysis the only "application", but design and synthesis are another
very important area. There, data crunching is far less important, and
symbolic manipulation is much more important for understanding the
design trade-offs. So for my everyday work (as an engineer, so very
application-oriented) I consider symbolic manipulation (look at how
often I ask in this group for symbolic indefinite integration, symbolic
Laplace/Fourier transforms, etc.) at least as important as numerical
data analysis.

>  There are quite a few people now improving Maxima without rewriting
> it in python.  There is material in the open literature that suggests
> a total redesign might cure some problems. I've written some of it.
> So far as I know, the Sage people are not aware of these issues.  They
> might even be in the parts that are not needed by current users and so
> would be left out, much to the detriment of future directions of
> growth.
>
> A reason for (doing something like rewriting Maxima in python) might
> be what you suggest below, sort of.
>
> (e) A complete redesign of a computer algebra system with facilities
> like those in Maxima, from top to bottom is long overdue. We propose
> to do this.  We want to use Python because, uh, because we think uh,
> high school students know Python?  And Lisp is bad for computer
> algebra. See how hard it was to use in Maxima, Reduce, Jacal,
> Axiom, ...
> [Maple and Mathematica are, I think, written in C extended in some
> ways].
>

I would like to kindly ask you to point out the reasons why Maxima
would benefit from a complete redesign, so that these points can be
taken as starting points for the new CAS. I'm sure you can give a lot
of good (and potentially constructive) advice, and I'm sure people here
are really willing to listen to it.

Regards

Maurizio



[sage-devel] Re: sage and symbolic software design

2009-05-02 Thread Nick Alexander

To everyone participating in this thread:

PLEASE LET IT GO.

This is a list discussing the development of Sage, both technical and
social aspects.  Is this thread helping?  Is this thread significantly
different from previous incarnations?  Have any of those threads  
helped the sage project?

I have stopped myself from writing this message twice, but three times  
is the charm for me.  PLEASE LET THIS GO.

Nick




[sage-devel] Re: Functional derivative in Sage using pynac

2009-05-02 Thread Golam Mortuza Hossain

Hi,

On Sun, Apr 26, 2009 at 7:43 PM, Tim Lahey  wrote:
> I do it from a mathematical perspective. The code to do the variation
> itself is

Thanks, Tim.  I played around further with current Sage to see what
needs to be improved to implement functional derivatives in Sage.
I followed the mathematical definition you used in Maple. In physics,
one takes the test function to be the Dirac delta (that's the only difference).

I tried with the following simple implementation
-
def fdiff(L, q):
    """
    Functional Derivative
    """
    var('epsilon')
    f(t) = function('dirac_delta', t)  # test function

    return diff(L.substitute(q=q + epsilon*f(t)), epsilon).substitute(epsilon=0)

# Example usage:

# Time and Position
var('t');  q(t) = function('q',t)

# Lagrangian for Simple Pendulum
L = (diff(q(t),t))^2/2 - q(t)^2/2

# Action
S = integrate(L, t)

# Euler-Lagrange equation directly follows from variation of action
fdiff(S,q)
--

In fact, the above 'works' in current Sage, but only by misusing the
current "substitute" method.

To implement a proper functional derivative in current Sage, the
following needs some work:

(1) Improved "substitute()":

Currently, "substitute" works if the keyword is a symbolic variable.
It doesn't seem to work when the keyword is a symbolic function.  Does
anyone know whether there are other ways in Sage to substitute
a symbolic expression for a symbolic function?


(2) dirac_delta(x):

I am planning to use Sage to compute Poisson brackets, where
I need the functional derivative (with the Dirac delta as test function).
Maxima seems to have implemented the Dirac delta (for Laplace
transforms). Is anyone working on implementing the Dirac delta
in Sage (or in pynac)?

I am planning to start working on the above two. However, I
would prefer to avoid duplicating effort.


Thanks,
Golam




[sage-devel] Re: Wolfram Alpha and Google (Trendalyzer)

2009-05-02 Thread mark mcclure

On May 2, 1:02 pm, Robert Dodier  wrote:
> Maybe a different example is needed; Maxima can now
> compute such integrals.

Thanks Robert,

I did see on the Maxima discussion list back on February 20
that CVS Maxima could do these integrals.  However, I checked
Maxima 5.18.1 on my Mac laptop and the following returns
unevaluated:
integrate((x^m * (1 - x)^n * (a + b*x + c*x^2))/(1 + x^2), x,0,1);

Am I doing something wrong?  As I recall, version 5.17.1 asked
questions about the parameters.
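
For reference, here is a hedged sketch of the same attempt from the Sage
side (Sage delegates this to Maxima, so whether it evaluates depends on the
Maxima version Sage ships; the assume() calls are only one guess at the
assumptions Maxima asks about, and the exact set it needs may differ):

# Sage session; the integral is handed to Maxima under the hood.
var('x m n a b c')
assume(m > 0)   # declare assumptions so Maxima asks fewer questions
assume(n > 0)
integrate((x^m * (1 - x)^n * (a + b*x + c*x^2)) / (1 + x^2), x, 0, 1)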

I've been a fan of Maxima for years and have long recommended
it to folks who have wanted a computer algebra system but didn't
want to pay for Maple or Mathematica.  This is the second time
I've posted a problem integral to this group only to have you
report that it's fixed (or being fixed).  Thanks a lot for your work
on Maxima.

Of course, there will always be problems that one tool can do
that another cannot.  I, for one, will happily continue to use all
types of mathematical software.

Mark




[sage-devel] Re: Functional derivative in Sage using pynac

2009-05-02 Thread Maurizio

Hi,

I had previously been interested in implementing the Dirac delta
function in pynac, especially for Laplace transforms, so maybe I can
give you a couple of references.

First of all, the (quite long) thread about this in the sage-devel
group:

http://groups.google.com/group/sage-devel/browse_frm/thread/91289956e4db80c6/368c6c89935b85ad?lnk=gst&q=delta+dirac+pynac#368c6c89935b85ad

In the first response, Burcin had given a link to his (very basic)
draft of the dirac_delta function that he sent me a couple of weeks
ago.

Moreover, I think another interesting thread is the one on the SymPy
bugtracker:

http://code.google.com/p/sympy/issues/detail?id=672

At the end of that thread, they have DiracDelta implemented, so
you can actually read some Python code implementing the Dirac delta,
although based on SymPy.
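
For anyone who wants a quick feel for it, here is a minimal SymPy-based
illustration (plain SymPy, not pynac or Sage code) of the sifting property
that such an implementation is built around:

from sympy import DiracDelta, Symbol, integrate, cos, oo

x = Symbol('x', real=True)
print(integrate(DiracDelta(x), (x, -oo, oo)))               # 1
print(integrate(cos(x) * DiracDelta(x - 1), (x, -oo, oo)))  # cos(1)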

I can't give references about Maxima, because I'm not able to
understand LISP, so that would have been useless to me.

My final comment is that I understand I had better wait for Sage 4.0
before coming back to these kinds of issues, because that would be the
first release (if I got it right) with pynac enabled (maybe by
default?); after that, we can start building something on top of it. I
am not even sure whether parts of the pynac engine are still undergoing
rewriting or development. But I think 4.0 is coming soon!

I would be glad to hear your impressions and to be updated on your
work :)

Regards

Maurizio

On 2 Mag, 20:53, Golam Mortuza Hossain  wrote:
> Hi,
>
> On Sun, Apr 26, 2009 at 7:43 PM, Tim Lahey  wrote:
> > I do it from a mathematical perspective. The code to do the variation
> > itself is
>
> Thanks Tim.  I played around further with current Sage to see what
> stuffs need to be improved in implementing functional derivative in Sage.
> I followed the mathematical definition that you did in maple. In Physics,
> one takes the test-function to be Dirac delta (that's the only difference).
>
> I tried with the following simple implementation
> -
> def fdiff(L, q):
>     """
>     Functional Derivative
>     """
>     var('epsilon');
>     f(t) = function('dirac_delta',t)  # Test function
>
>     return diff(L.substitute(q=q+epsilon*f(t)), epsilon).substitute(epsilon=0)
>
> # Example usage:
>
> # Time and Position
> var('t');  q(t) = function('q',t)
>
> # Lagrangian for Simple Pendulum
> L = (diff(q(t),t))^2/2 - q(t)^2/2
>
> # Action
> S = integrate(L, t)
>
> # Euler-Lagrange equation directly follows from variation of action
> fdiff(S,q)
> --
>
> In fact, above 'works' in current sage but only after misusing
> current "substitute" method.
>
> To implement proper functional derivative in current Sage,
> following stuffs need some work
>
> (1) Improved "substitute()":
>
> Currently, "substitute" works, if keyword is a symbolic variable.
> It doesn't seem to work when keyword is a symbolic function.  Does
> anyone know whether there are other ways in Sage to substitute
> a symbolic function by a symbolic expression?
>
> (2) dirac_delta(x):
>
> I am planning to use Sage to compute Poisson brackets where
> I need functional derivative (with Dirac delta as test-function).
> Maxima seems to have implemented Dirac delta (for Laplace
> transform). Does anyone working in implementing Dirac delta
> in Sage? (or in pynac?)
>
> I am planning to start working on the above two. However, I
> will prefer to avoid any effort duplication.
>
> Thanks,
> Golam



[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread mabshoff



On May 2, 2:58 am, Alex Ghitza  wrote:
> On Fri, May 1, 2009 at 5:13 PM, mabshoff  wrote:
>
> > While going over the open tickets in 4.0 I noticed this ticket:
>
> >   #5943 (Sage 3.4.2.a0: prime_pi(2^50) segfaults)
>
> > If someone could take a stab at that it would be nice since that is
> > brand new code and ought to be a little bit more stable than that. I
> > am waiting on a fix for #5952 in the morning, i.e. Saturday, so some
> > hero might want to earn some brownie points over night :)
>
> Brownies are overrated.

:)

> I played around with prime_pi() for a while, both on sage.math and on
> my laptop at the office (macbook running 32-bit archlinux).  I didn't
> manage to get a segfault on either machine with prime_pi(2^50).  I
> guess that's the good news?  

Somewhat. As it turns out 3.4.2.a0 does not blow up, but 3.4.1 does:

mabsh...@sage:/scratch/mabshoff/sage-3.4.2.final$ sage
--
| Sage Version 3.4.1, Release Date: 2009-04-21   |
| Type notebook() for the GUI, and license() for information.|
--
sage: time prime_pi(2^50)
/usr/local/sage/local/bin/sage-sage: line 198: 13649 Segmentation
fault  sage-ipython "$@" -i
mabsh...@sage:/scratch/mabshoff/sage-3.4.2.final$ ./sage
--
| Sage Version 3.4.2.rc0, Release Date: 2009-04-30   |
| Type notebook() for the GUI, and license() for information.|
--
sage: time prime_pi(2^50)


> Anyway, the bad news is that the answers
> returned do not agree between the two machines, e.g.
>
> {{{
> # on sage.math
> sage: time prime_pi(2^50)
> CPU times: user 4854.46 s, sys: 3.17 s, total: 4857.63 s
> Wall time: 4857.73 s
> 33483379603407
> #
> # on my laptop
> sage: time prime_pi(2^50)
> CPU times: user .60 s, sys: 7.81 s, total: 5563.40 s
> Wall time: 5598.64 s
> 21969300962685
>
> }}}

Interesting. I tried to find a similar counterexample with Solaris/
x86 and Solaris/SPARC, but it seems I wasn't patient enough, and I
also used two 32-bit versions.

What to do? Put a cap on the arguments for which prime_pi gives you an
answer and throw a NotImplementedError otherwise, i.e. stop at some 2^n
we feel comfortable with.
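
To make that concrete, a minimal sketch of such a cap (the bound and the
names are placeholders for discussion, not the actual Sage code;
prime_pi_unchecked stands in for the existing implementation):

PRIME_PI_BOUND = 2**40  # hypothetical cutoff; the thread discusses 2^40-2^43

def prime_pi_capped(n):
    # Refuse inputs beyond the range where the result is trusted.
    if n > PRIME_PI_BOUND:
        raise NotImplementedError(
            "prime_pi(n) is only supported for n <= %s" % PRIME_PI_BOUND)
    return prime_pi_unchecked(n)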

> I'm pretty sure that the sage.math answer is more likely to be the
> right one.  You can maybe guess from the timings why I didn't try
> prime_pi(2^51).  I have, however, tried smaller values.  I'm going to
> put that data up on the trac ticket.

Thanks.

> I have absolutely no idea what's wrong, but something definitely is,
> and hopefully this will help someone fix it.

The code uses doubles and a "pseudo" long double sqrt (with built-in
error checking for the pseudo long double function). I am not really
surprised that we are seeing issues for large inputs and I am glad you
detected the problem :)

> Best,
> Alex

Cheers,

Michael

> --
> Alex Ghitza -- Lecturer in Mathematics -- The University of Melbourne
> -- Australia --http://www.ms.unimelb.edu.au/~aghitza/



[sage-devel] Re: Wolfram Alpha and Google (Trendalyzer)

2009-05-02 Thread Andras Salamon

On Fri, May 01, 2009 at 10:32:41AM -0700, Brian Granger wrote:
> I bring this up because I think we need to have better reasons about
> why open source is important - arguments that are compelling to folks
> who have been working successfully for years without reading the
> source.  I don't know what these are, but I know that we need them.

(Speaking as an outsider, not having contributed to Sage, but having
contributed in small ways to several other projects.)

When I find a bug, access to the source code allows me either to
produce a strong test case _quickly_, or to produce a correct patch for
the problem.  Instead of punting the problem upstream, I can actually
contribute directly.  If the source is not available, I would need to
spend additional time to build a sufficiently rich conceptual model
of the black-box internals to be able to produce a decent test case.
Why would anyone want to invest time in such activity?

So by closing the source one is cutting down the pool of people
who are going to have the incentive to contribute meaningfully.
Only those who spend their days immersed in the environment of the
particular software package, and the paid employees of the software
company, will have the incentive to contribute meaningfully.

The "casual" users are thus excluded from contributing to closed
source systems.  If the number and quality of the people inside the
software ecosystem is sufficiently high, and remains high as the
software and people age, then this doesn't matter: there will be
sufficient force being applied to keep the system going.  But over
time projects that manage to harness contributions of those outside
the magic circle are in a very strong position, since they can achieve
much with few explicit resources.

Few open source projects manage to effectively harness contributions
from those outside: if the code is obscure, badly written, poorly
documented, the inherent problems are very hard, the project is badly
organized, or the project is subverted, then the external force of
casual contributors is dissipated.  The fourth in this list seems to be
the point that RJF is focusing on, combined perhaps with an assessment
that the total force is low, whereas William Stein seems to believe
that the external force is great, and can be usefully harnessed.

-- Andras Salamon   andras.sala...@comlab.ox.ac.uk




[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread mabshoff



On May 2, 1:00 pm, mabshoff  wrote:
> On May 2, 2:58 am, Alex Ghitza  wrote:



> > I played around with prime_pi() for a while, both on sage.math and on
> > my laptop at the office (macbook running 32-bit archlinux).  I didn't
> > manage to get a segfault on either machine with prime_pi(2^50).  I
> > guess that's the good news?  
>
> Somewhat. As it turns out 3.4.2.a0 does not blow up, but 3.4.1 does:
>
> mabsh...@sage:/scratch/mabshoff/sage-3.4.2.final$ sage
> --
> | Sage Version 3.4.1, Release Date: 2009-04-21                       |
> | Type notebook() for the GUI, and license() for information.        |
> --
> sage: time prime_pi(2^50)
> /usr/local/sage/local/bin/sage-sage: line 198: 13649 Segmentation
> fault      sage-ipython "$@" -i
> mabsh...@sage:/scratch/mabshoff/sage-3.4.2.final$ ./sage
> --
> | Sage Version 3.4.2.rc0, Release Date: 2009-04-30                   |
> | Type notebook() for the GUI, and license() for information.        |
> --
> sage: time prime_pi(2^50)
> 

Ok, in hindsight it is pretty obvious why this doesn't segfault in
3.4.2.a0 any more:

mabsh...@sage:/scratch/mabshoff/sage-3.4.2.final$ ./sage
--
| Sage Version 3.4.2.rc0, Release Date: 2009-04-30   |
| Type notebook() for the GUI, and license() for information.|
--
sage: len(prime_range(2^50))
/scratch/mabshoff/sage-3.4.2.final/local/bin/sage-sage: line 198:
13833 Segmentation fault  sage-ipython "$@" -i

So I am rewriting the tickets: #5943 is about the still existing crash
in 3.4.2.final while #5963 is about the wrong results for prime_pi()
on some platforms.

Cheers,

Michael



[sage-devel] Re: Functional derivative in Sage using pynac

2009-05-02 Thread William Stein

On Sat, May 2, 2009 at 12:54 PM, Maurizio  wrote:
>
> Hi,
>
> I had previously been interested in implementing delta dirac function
> in pynac, especially for Laplace transform, so maybe I can give you a
> couple of references.
>
> First of all, the (quite long) thread about this in the sage-devel
> group:
>
> http://groups.google.com/group/sage-devel/browse_frm/thread/91289956e4db80c6/368c6c89935b85ad?lnk=gst&q=delta+dirac+pynac#368c6c89935b85ad
>
> In the first response, Burcin had given a link to his (very basic)
> draft of the dirac_delta function that he sent me a couple of weeks
> ago.
>
> Moreover, I think another interesting thread is the one on the SymPy
> bugtracker:
>
> http://code.google.com/p/sympy/issues/detail?id=672
>
> At the end of this thread, they have the DiracDelta implemented, so
> you can actually read some python code implementing dirac delta,
> although based on SymPy.
>
> I can't give references about Maxima, because I'm not able to
> understand LISP, so that would have been useless to me.
>
> My final comment is that, I have understood that I should better wait
> for SAGE 4.0 to come back with these kind of issues, because that
> would be the first release (if I got it right) with Pynac enabled
> (maybe by default?), and so after that we can start working on
> building something on top of it.

Yes, that is exactly right.

>  I am not even sure if something in
> Pynac engine is not going under some rewriting or development. But I
> think that 4.0 is coming soon!

Don't worry -- 4.0 is indeed coming soon.

William




[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread Dr. David Kirkby

Alex Ghitza wrote:
> On Fri, May 1, 2009 at 5:13 PM, mabshoff  wrote:
>> While going over the open tickets in 4.0 I noticed this ticket:
>>
>>   #5943 (Sage 3.4.2.a0: prime_pi(2^50) segfaults)
>>
>> If someone could take a stab at that it would be nice since that is
>> brand new code and ought to be a little bit more stable than that. I
>> am waiting on a fix for #5952 in the morning, i.e. Saturday, so some
>> hero might want to earn some brownie points over night :)
>>
> 
> Brownies are overrated.
> 
> I played around with prime_pi() for a while, both on sage.math and on
> my laptop at the office (macbook running 32-bit archlinux).  I didn't
> manage to get a segfault on either machine with prime_pi(2^50).  I
> guess that's the good news?  Anyway, the bad news is that the answers
> returned do not agree between the two machines, e.g.
> 
> {{{
> # on sage.math
> sage: time prime_pi(2^50)
> CPU times: user 4854.46 s, sys: 3.17 s, total: 4857.63 s
> Wall time: 4857.73 s
> 33483379603407
> #
> # on my laptop
> sage: time prime_pi(2^50)
> CPU times: user .60 s, sys: 7.81 s, total: 5563.40 s
> Wall time: 5598.64 s
> 21969300962685
> }}}
> 
> I'm pretty sure that the sage.math answer is more likely to be the
> right one.  You can maybe guess from the timings why I didn't try
> prime_pi(2^51).  I have, however, tried smaller values.  I'm going to
> put that data up on the trac ticket.

Mathematica 6 (on a Sun SPARC) gives an answer in far less time than Sage:


In[3]:= PrimePi[2^50]

PrimePi::largp:
Argument 1125899906842624 in PrimePi[1125899906842624]
  is too large for this implementation.

Out[3]= PrimePi[1125899906842624]


Well, perhaps not really an answer!





[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread mabshoff



On May 2, 2:44 pm, "Dr. David Kirkby"  wrote:
> Alex Ghitza wrote:

Hi David,



> > I'm pretty sure that the sage.math answer is more likely to be the
> > right one.  You can maybe guess from the timings why I didn't try
> > prime_pi(2^51).  I have, however, tried smaller values.  I'm going to
> > put that data up on the trac ticket.
>
> Mathematica 6 (on a Sun SPARC) gives an answer in far less time than Sage:
>
> In[3]:= PrimePi[2^50]
>
> PrimePi::largp:
>     Argument 1125899906842624 in PrimePi[1125899906842624]
>       is too large for this implementation.
>
> Out[3]= PrimePi[1125899906842624]
>
> Well, perhaps not really an answer!

:)

Could you figure out what the upper bound is that MMA allows? I have
discussed this with William in IRC and in 3.4.2 we should just throw a
NotImplementedError for some bound where we are comfortable with
knowing the result is correct on 32 and 64 bit. Unfortunately, this
isn't something we can doctest in a reasonable amount of time.

The suggestion then was to implement something on top of the range
computed with floats using MPFR for example, but we will see what
happens. I am sure that if I asked if someone needed to compute
prime_pi() for anything larger than 2^48 someone would say yes, so
this ought to be fixed.

Cheers,

Michael



[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread Dr. David Kirkby

Alex Ghitza wrote:
> On Fri, May 1, 2009 at 5:13 PM, mabshoff  wrote:
>> While going over the open tickets in 4.0 I noticed this ticket:
>>
>>   #5943 (Sage 3.4.2.a0: prime_pi(2^50) segfaults)
>>
>> If someone could take a stab at that it would be nice since that is
>> brand new code and ought to be a little bit more stable than that. I
>> am waiting on a fix for #5952 in the morning, i.e. Saturday, so some
>> hero might want to earn some brownie points over night :)
>>
> 
> Brownies are overrated.
> 
> I played around with prime_pi() for a while, both on sage.math and on
> my laptop at the office (macbook running 32-bit archlinux).  I didn't
> manage to get a segfault on either machine with prime_pi(2^50).  I
> guess that's the good news?  Anyway, the bad news is that the answers
> returned do not agree between the two machines, e.g.
> 
> {{{
> # on sage.math
> sage: time prime_pi(2^50)
> CPU times: user 4854.46 s, sys: 3.17 s, total: 4857.63 s
> Wall time: 4857.73 s
> 33483379603407
> #
> # on my laptop
> sage: time prime_pi(2^50)
> CPU times: user .60 s, sys: 7.81 s, total: 5563.40 s
> Wall time: 5598.64 s
> 21969300962685
> }}}
> 
> I'm pretty sure that the sage.math answer is more likely to be the
> right one.  You can maybe guess from the timings why I didn't try
> prime_pi(2^51).  I have, however, tried smaller values.  I'm going to
> put that data up on the trac ticket.


You should have used Mathematica - it returns its answer very quickly:


Mathematica 6.0 for Sun Solaris SPARC (64-bit)
Copyright 1988-2008 Wolfram Research, Inc.

In[1]:= ?PrimePi
PrimePi[x] gives the number of primes \[Pi] (x) less than or equal to x.

In[2]:= PrimePi[2^50]

PrimePi::largp:
Argument 1125899906842624 in PrimePi[1125899906842624]
  is too large for this implementation.

Out[2]= PrimePi[1125899906842624]





[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread William Stein

On Sat, May 2, 2009 at 2:50 PM, mabshoff  wrote:
>
>
>
> On May 2, 2:44 pm, "Dr. David Kirkby"  wrote:
>> Alex Ghitza wrote:
>
> Hi David,
>
> 
>
>> > I'm pretty sure that the sage.math answer is more likely to be the
>> > right one.  You can maybe guess from the timings why I didn't try
>> > prime_pi(2^51).  I have, however, tried smaller values.  I'm going to
>> > put that data up on the trac ticket.
>>
>> Mathematica 6 (on a Sun SPARC) gives an answer in far less time than Sage:
>>
>> In[3]:= PrimePi[2^50]
>>
>> PrimePi::largp:
>>     Argument 1125899906842624 in PrimePi[1125899906842624]
>>       is too large for this implementation.
>>
>> Out[3]= PrimePi[1125899906842624]
>>
>> Well, perhaps not really an answer!
>
> :)
>
> Could you figure out what the upper bound is that MMA allows? I have
> discussed this with William in IRC and in 3.4.2 we should just throw a
> NotImplementedError for some bound where we are comfortable with
> knowing the result is correct on 32 and 64 bit. Unfortunately this
> isn't something we can doctest with a reasonable amount of time.
>
> The suggestion then was to implement something on top of the range
> computed with floats using MPFR for example, but we will see what
> happens. I am sure that if I asked if someone needed to compute
> prime_pi() for anything larger than 2^48 someone would say yes, so
> this ought to be fixed.

Andrew looked into this whole issue a while ago, and told me that the
prime_pi he implemented *should* only work up to about 2^40, and the
algorithm would take far too long above there.   I thought he had
included an error message if the input exceeds 2^40, but I guess not.
   So +1 to your suggestion above, but with a smaller bound than 2^48.

He told me Mathematica can go up to about 2^45 or so, but not beyond.
The algorithm in Mathematica is completely different (and better) than
what Andrew implemented for Sage.   As far as I know the situation for
computing pi(X) using general purpose math software is thus:

   * Mathematica -- has the best implementation available

   * Sage -- now has the second best implementation available

   * Pari, Maple, Matlab -- "stupid" implementations, which all simply
enumerate all primes up to x and see how many there are.  Useless, and
can't come close to what Sage or Mathematica do.

 -- William




[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread mabshoff



On May 2, 3:52 pm, William Stein  wrote:
> On Sat, May 2, 2009 at 2:50 PM, mabshoff  wrote:



> > The suggestion then was to implement something on top of the range
> > computed with floats using MPFR for example, but we will see what
> > happens. I am sure that if I asked if someone needed to compute
> > prime_pi() for anything larger than 2^48 someone would say yes, so
> > this ought to be fixed.
>
> Andrew looked into this whole issue a while ago, and told me that the
> prime_pi he implemented *should* only work up to about 2^40, and the
> algorithm would take far too long above there.   I thought he had
> included an error message if the input exceeds 2^40, but I guess not.
>    So +1 to your suggestion above, but with a smaller bound that 2^48.

Cool.

> He told me Mathematica can go up to about 2^45 or so, but not beyond.

At least for MMA 6.0 on linux x86-64 the limit seems to be around
2^47:

          MMA      Sage

2^44:    18.04    110.88   (597116381732)
2^45:    29.98    207.61   (1166746786182)
2^46:    47.59    389.98   (2280998753949)
2^47:    89.25    728.84   (4461632979717)
2^48:    NA :)    about an hour - correct?

According to Alex's numbers, at least on his laptop, 2^46 was correct on
32 bits, but given the length of the test (~6 minutes on sage.math),
this isn't really doctestable.

> The algorithm in Mathematica is completely different (and better) than
> what Andrew implemented for Sage.   As far as I know the situation for
> computing pi(X) using general purpose math software is thus:
>
>    * Mathematica -- has the best implementation available
>
>    * Sage -- now has the second best implementation available

Yep, the old implementation was about 1000 times slower than Andrew's,
which is about 5 times slower than MMA 6.0 - so great job
catching us up from 5000 times slower to only 5 times :).

>    * Pari, Maple, Matlab -- "stupid" implementations, which all simply
> enumerate all primes up to x and see how many there are.  Useless, and
> can't come close to what Sage or Mathematica do.

Well, what should we pick as the upper bound? 2^40 seems to be what Andrew
suggests, but maybe 2^42 or 2^43? In that range we can actually add
#long doctests and I would be much more comfortable that we would at
least catch potential issues.

>  -- William

Cheers,

Michael



[sage-devel] Re: sage and symbolic software design

2009-05-02 Thread root

This is such an amusing thread. Try re-reading the thread as if everyone
were arguing that "we should improve Maxima because it is open source
and many people can improve upon it". Sure, you'd have to learn lisp
but Guido argues that python is lisp, so is the learning curve so steep?

On average over the lifetime of a project, code costs about $43 per line.
That means that Maxima is worth about $10,750,000.
At a really productive rate of 100 lines of debugged code per day
that's 2,500 days, or 10 years of work (5 days x 50 weeks).
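
(Spelled out, as a back-of-the-envelope check using the ~250k-line figure
mentioned just below:

    lines = 250000            # rough size of Maxima
    print(lines * 43)         # 10750000 -> about $10.75 million
    days = lines / 100        # 2500 days of debugged code
    print(days / (5 * 50))    # 10 years at 5 days x 50 weeks

so the order of magnitude is hard to argue with.)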

If you had 250k lines of python and someone said "let's rewrite it in Java
because ..." then you would assume the person is misguided. But you
have 250k lines of lisp and now you argue "let's rewrite it in Python
because ...".

Maxima is old, reliable, solid code that can and does run in Sage.
There are porting issues but the total work required to debug a port
is completely dwarfed by the total work to write and debug a python version.

If you believe that porting issues are lisp-specific, the transition to
python 3.0, 4.0, 5.0,  might convince you otherwise (eventually).

I don't have a dog in this fight so I don't care either way.
I'm just entertained by all of the "open source is great" arguments that
get applied to everything but Maxima and lisp.

Tim Daly

--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to 
sage-devel-unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~--~~~~--~~--~--~---



[sage-devel] Re: sage and symbolic software design

2009-05-02 Thread William Stein

On Sat, May 2, 2009 at 10:34 AM, Nick Alexander  wrote:
>
> To everyone participating in this thread:
>
> PLEASE LET IT GO.
>
> This is a list discussing the development of sage, both technical and
> social aspect.  Is this thread helping?  Is this thread significantly
> different from previous incarnations?  Have any of those threads
> helped the sage project?
>
> I have stopped myself from writing this message twice, but three times
> is the charm for me.  PLEASE LET THIS GO.

I think discussions like the one above do actually have some value for
some people, but I agree that they do not belong on sage-devel.  I
have thus created a new mailing list sage-flame for all discussions
related to Sage that begin evolving into flame wars:

   http://groups.google.com/group/sage-flame

1. Anybody who actually finds some value in threads like the above
should subscribe (I've sent invites to Fateman and Tim Daly).

2. Any time a thread like the above starts on sage-devel/support/edu,
then it should *immediately* be moved to sage-flame, so that it
doesn't waste anybody's time or energy.

No further messages should be posted in the current thread.

 -- William

--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to 
sage-devel-unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~--~~~~--~~--~--~---



[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread Dr. David Kirkby

mabshoff wrote:
>>> ...ed smaller values.  I'm going to
>>> put that data up on the trac ticket.
>> Mathematica 6 (on a Sun SPARC) gives an answer in far less time than Sage:
>>
>> In[3]:= PrimePi[2^50]
>>
>> PrimePi::largp:
>> Argument 1125899906842624 in PrimePi[1125899906842624]
>>   is too large for this implementation.
>>
>> Out[3]= PrimePi[1125899906842624]
>>
>> Well, perhaps not really an answer!
> 
> :)
> 
> Could you figure out what the upper bound is that MMA allows? I have
> discussed this with William in IRC and in 3.4.2 we should just throw a
> NotImplementedError for some bound where we are comfortable with
> knowing the result is correct on 32 and 64 bit. Unfortunately this
> isn't something we can doctest with a reasonable amount of time.
> 
> The suggestion then was to implement something on top of the range
> computed with floats using MPFR for example, but we will see what
> happens. I am sure that if I asked if someone needed to compute
> prime_pi() for anything larger than 2^48 someone would say yes, so
> this ought to be fixed.
> 
> Cheers,
> 
> Michael

Hi Michael,

Yes, I can figure it out.

The upper limit is PrimePi[249999999999999] (2.5x10^14), which 
Mathematica gives as 7783516108362. It took 20 minutes or so on a 
heavily loaded machine.

In[95]:= PrimePi[249999999999999]

Out[95]= 7783516108362

It can not manage PrimePi[250000000000000]

2^47 is 1.41x10^14,
2^48 is 2.81x10^14.

Since the maximum that can be handled is just under 2.5x10^14, 
Mathematica can compute PrimePi[2^47], but not PrimePi[2^48]

Here's a table of PrimePi[2^n], with n ranging from 0 to 47. It took 
roughly 20 minutes or so to compute the table.

In[19]:= Table[{n,PrimePi[2^n]},{n,0,47}]

Out[19]= {{0, 0}, {1, 1}, {2, 2}, {3, 4}, {4, 6}, {5, 11}, {6, 18}, {7, 31},
 >{8, 54}, {9, 97}, {10, 172}, {11, 309}, {12, 564}, {13, 1028},
 >{14, 1900}, {15, 3512}, {16, 6542}, {17, 12251}, {18, 23000},
 >{19, 43390}, {20, 82025}, {21, 155611}, {22, 295947}, {23, 564163},
 >{24, 1077871}, {25, 2063689}, {26, 3957809}, {27, 7603553},
 >{28, 14630843}, {29, 28192750}, {30, 54400028}, {31, 105097565},
 >{32, 203280221}, {33, 393615806}, {34, 762939111}, {35, 1480206279},
 >{36, 2874398515}, {37, 5586502348}, {38, 10866266172},
 >{39, 21151907950}, {40, 41203088796}, {41, 80316571436},
 >{42, 156661034233}, {43, 305761713237}, {44, 597116381732},
 >{45, 1166746786182}, {46, 2280998753949}, {47, 4461632979717}}


PS, Mathematica computes PrimePi[some_negative_number] as 0. Does Sage 
handle that case ok?
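
(For what it's worth, a few of the smaller entries in the table above are
cheap to spot-check from a Sage prompt, assuming the new prime_pi is
installed; the pairs below are taken straight from the Out[19] table:

   sage: all(prime_pi(2^n) == v
   ....:     for n, v in [(16, 6542), (24, 1077871), (32, 203280221)])
   True
   sage: prime_pi(-10)   # Mathematica returns 0 here; I have not checked Sage
)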

Dave






--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to 
sage-devel-unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~--~~~~--~~--~--~---



[sage-devel] NSF conference

2009-05-02 Thread root

I've just returned from the NSF conference.

There was a big push for teaching, especially related to CAS. I suggested
a joint effort with the game industry. The idea would be to use a game
like the bridge building game (www.bridgebuilder-game.com) and a CAS.

The idea of the bridge game is to construct a bridge and then apply a
load until it fails.  Students could start building a simple model of
the bridge by attaching matrices to the ends of the beam elements. 
Then they would predict the force to destroy the bridge and
be measured on how close their model is to the actual result. The next
class could add stress or strain or Young's modulus or gravity load,
etc.  At the end of 13 weeks, the grades are given by the final
ranking kept by the game program.
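
(As a toy illustration of the "attach matrices to the beam elements" step --
all numbers invented, a single axial bar element with the textbook stiffness
matrix K = (E*A/L) * [[1, -1], [-1, 1]], written as Sage/Python:

   sage: E, A, L = 200e9, 1e-4, 2.0     # made-up modulus, cross-section, length
   sage: K = (E*A/L) * matrix(RDF, [[1, -1], [-1, 1]])   # element stiffness matrix
   sage: u_tip = -1e4 / (E*A/L)         # clamp one end, pull 10 kN: u = F/k = -0.001 m

students would assemble many such element matrices into a global stiffness
matrix for the whole bridge and solve for the member forces at failure.)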

The point of the exercise is to develop the skills to construct models
using a CAS.

This seems like a "python-can-do-it" kind of project.
The NSF person at the conference liked the idea a lot.

Tim Daly




--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to 
sage-devel-unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~--~~~~--~~--~--~---



[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread mabshoff



On May 1, 6:54 am, Kiran Kedlaya  wrote:
> Clean build on 64-bit Fedora 10 (Opteron) fails one doctest:
>
> sage -t  "devel/sage/sage/sets/primes.py"
> **
> File "/opt/sage/sage-3.4.2.rc0/devel/sage/sage/sets/primes.py", line
> 80:
>     sage: P>x^2+x
> Expected:
>     True
> Got:
>     False



For the record: This is now #5966 and will be fixed in 3.4.2.final.

Cheers,

Michael
--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to 
sage-devel-unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~--~~~~--~~--~--~---



[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread kcrisman


>
> For the record: This is now #5966 and will be fixed in 3.4.2.final.

It also has been #5959, with a patch, since yesterday morning -
figured if I caused the trouble, I should fix it :)  That doesn't
address "needlessly starting Maxima" but unfortunately I won't be able
to address that until at least Monday.

HTH,
- kcrisman
--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to 
sage-devel-unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~--~~~~--~~--~--~---



[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread mabshoff



On May 2, 6:39 pm, kcrisman  wrote:
> > For the record: This is now #5966 and will be fixed in 3.4.2.final.
>
> It also has been #5959, with a patch, since yesterday morning -
> figured if I caused the trouble, I should fix it :)  That doesn't
> address "needlessly starting Maxima" but unfortunately I won't be able
> to address that til at least Monday.

Thanks for the heads up, I closed it as dupe of #5966 since it is
trivial to fix. Note that #5959 was not marked properly to have a
patch.

> HTH,
> - kcrisman

Cheers,

Michael
--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to 
sage-devel-unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~--~~~~--~~--~--~---



[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread Dr. David Kirkby

mabshoff wrote:

>> He told me Mathematica can go up to about 2^45 or so, but not beyond.
> 
> At least for MMA 6.0 on linux x86-64 the limit seems to be around
> 2^47:

As I said in the other post, the limit is PrimePi[249999999999999].

>         MMA    Sage
> 
> 2^44:   18.04  110.88   (597116381732)
> 2^45:   29.98  207.61   (1166746786182)
> 2^46:   47.59  389.98   (2280998753949)
> 2^47:   89.25  728.84   (4461632979717)
> 2^48:   NA :)  about an hour - correct?
> 
> According to Alex's numbers at least on his laptop 2^46 was correct on
> 32 bits, but given the length of the test (~6 minutes on sage.math
> this isn't really doctestable).
> 
>> The algorithm in Mathematica is completely different (and better) than
>> what Andrew implemented for Sage.   As far as I know the situation for
>> computing pi(X) using general purpose math software is thus:

One (admittedly unlikely) possibility is that Wolfram have used the same 
algorithm, but used assembly code. I think they claim the kernel is 
C/C++, but that permits bits of inline assembly.


>>* Mathematica -- has the best implementation available
>>
>>* Sage -- now has the second best implementation available
> 
> Yep, the old implementation was about 1000 times slower than Andrew's
> which is about 5 times slower than MMA 6.0 - so great job from
> catching us up from 5000 times to only 5 times :).

I gather you have managed to get a SPARC version of Sage running. It 
would be interesting to compare the performance on SPARC of Mathematica 
and Sage. I very much doubt Wolfram would have used hand-optimised 
assembly code on SPARC, so if the timings still show Mathematica to be 
5x quicker, then yes, I would agree it's a better algorithm. But if the 
timings are very similar, it suggests to me that perhaps the better 
performance of Mathematica is due to writing assembly code, rather than 
using a high-level language.

>>* Pari, Maple, Matlab -- "stupid" implementations, which all simply
>> enumerate all primes up to x and see how many there are.  Useless, and
>> can't come close to what Sage or Mathematica do.

> Well, what should we pick as upper bound? 2^40 seems to be what Andrew
> suggests, but maybe 2^42 or 2^43? In that range we can actually add
> #long doctests and I would be much more comfortable that we would at
> least catch potential issues.

Personally, if Sage can go to 249999999999999, then I would use that as 
an upper bound. If the algorithm in Sage gives the same answer as 
Mathematica

In[95]:= PrimePi[249999999999999]
Out[95]= 7783516108362

then why not use Sage to there? The poor SPARC I used for this was very 
heavily loaded and quite old, but it only took 20 minutes or so. Even an 
algorithm that is 5x slower should be able to compute 
primepi(249999999999999) in under an hour on any half-decent modern 
machine.

Of course, a search of the literature for a better algorithm might have 
some mileage. Unfortunately, I no longer have access to a university 
account and so can't search journals without paying for access to 
individual ones, which clearly I can't justify. Neither am I 
particularly good at maths, having never studied maths beyond that 
needed for an engineering degree.


One unfortunate side-effect of the closed-source vs open-source code is 
the fact that if open-source code (e.g. Sage) has a faster algorithm 
than Mathematica, then it's relatively easy for WRI to look at the 
algorithm in Sage and use the same one, to bring Mathematica and Sage to 
similar performances. The converse is not true of course - it is not 
possible for Sage developers to find out the algorithm Mathematica uses. 
However, a look at

http://mathworld.wolfram.com/PrimeCountingFunction.html

might give some clues as to the algorithm Mathematica uses. Although the 
algorithm Mathematica uses is not stated on that MathWorld page, there are a 
number of references. Two look interesting:

Mapes, D. C. "Fast Method for Computing the Number of Primes Less than a 
Given Limit." Math. Comput. 17, 179-185, 1963.

Gourdon, X. "New Record Computation for pi(x), x=10^21." 27 Oct 2000. 
http://listserv.nodak.edu/scripts/wa.exe?A2=ind0010&L=nmbrthry&P=2988

The link 
http://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind0010&L=nmbrthry&P=2988 
states the algorithm used, but in a way I don't understand. It says:

"This value has been checked by computing pi(10^21+10^8) with
a different parameter y used in the algorithm"

but y is not defined!




--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to 
sage-devel-unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~--~~~~--~~--~--~---



[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread William Stein

On Sat, May 2, 2009 at 11:42 PM, Dr. David Kirkby
 wrote:
>
> mabshoff wrote:
>
>>> He told me Mathematica can go up to about 2^45 or so, but not beyond.
>>
>> At least for MMA 6.0 on linux x86-64 the limit seems to be around
>> 2^47:
>
> As I said in the other post, the limit is PrimePi[249999999999999].
>
>>          MMA        Sage
>>
>> 2^44:   18.04      110.88   (597116381732)
>> 2^45:   29.98      207.61   (1166746786182)
>> 2^46:   47.59      389.98   (2280998753949)
>> 2^47:   89.25      728.84   (4461632979717)
>> 2^48:   NA :)      about an hour - correct?
>>
>> According to Alex's numbers at least on his laptop 2^46 was correct on
>> 32 bits, but given the length of the test (~6 minutes on sage.math
>> this isn't really doctestable).
>>
>>> The algorithm in Mathematica is completely different (and better) than
>>> what Andrew implemented for Sage.   As far as I know the situation for
>>> computing pi(X) using general purpose math software is thus:
>
> One (admitidly unlikely) possibility is that Wolfram have used the same
> algorithm, but used assembly code. I think they claim the kernel is
> C/C++, but that permits bits of inline assembly.
>
>
>>>    * Mathematica -- has the best implementation available
>>>
>>>    * Sage -- now has the second best implementation available
>>
>> Yep, the old implementation was about 1000 times slower than Andrew's
>> which is about 5 times slower than MMA 6.0 - so great job from
>> catching us up from 5000 times to only 5 times :).
>
> I gather you have managed to get a SPARC version of Sage running. It
> would be interesting to compare the performance on SPARC of Mathematica
> and Sage. I very much doubt Wolfram would have used hand-optimised
> assembly code on SPARC, so if the timings still show Mathematica to be
> 5x quicker, then yes, I would agree it's a better algorithm. But if the
> timings are very similar, it suggests to me that perhaps the better
> performance of Mathematica is due to writing assembly code, rather than
> using a high-level language.
>
>>>    * Pari, Maple, Matlab -- "stupid" implementations, which all simply
>>> enumerate all primes up to x and see how many there are.  Useless, and
>>> can't come close to what Sage or Mathematica do.
>
>> Well, what should we pick as upper bound? 2^40 seems to be what Andrew
>> suggests, but maybe 2^42 or 2^43? In that range we can actually add
>> #long doctests and I would be much more comfortable that we would at
>> least catch potential issues.
>
> Personally, if Sage can go to 249999999999999, then I would use that as
> an upper bound. If the algorithm in Sage gives the same answer as
> Mathematica
>
> In[95]:= PrimePi[249999999999999]
> Out[95]= 7783516108362
>
> then why not use Sage to there? The poor SPARC I used for this was very
> heavily loaded and quite old, but it only took 20 minutes or so. Even an
> algorithm that is 5x slower should be able to compute
> primepi(249999999999999) in under an hour on any half-decent modern
> machine.
>
> Of course, a search of the literature for a better algorithm might have
> some millage. Unfortunately, I no longer have access to a university
> account and so can't search journals without paying for access to
> individual ones, which clearly I can't justify. Neither am I
> particularly good at maths, having never maths studied beyond that
> needed for an engineering degree
>
>
> One unfortunate side-effect of the closed-source vs open-source code is
> the fact that if open-source code (e.g. Sage) has a faster algorithm
> than Mathematica, then it's relatively easy for WRI to look at the
> algorithm in Sage and use the same one, to bring Mathematica and Sage to
> similar performances. The converse is not true of course - it is not
> possibly for Sage developers to find out the algorithm Mathematica uses.
> However, a look at
>
> http://mathworld.wolfram.com/PrimeCountingFunction.html
>
> might give some clues as to the algorithm Mathematica uses. Although the
> algorithm Mathematica is not stated in that Mathworld page, there are a
> number of references. Two look interesting:
>
> Mapes, D. C. "Fast Method for Computing the Number of Primes Less than a
> Given Limit." Math. Comput. 17, 179-185, 1963.
>
> Gourdon, X. "New Record Computation for pi(x), x=10^21." 27 Oct 2000.
> http://listserv.nodak.edu/scripts/wa.exe?A2=ind0010&L=nmbrthry&P=2988
>
> The link
> http://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind0010&L=nmbrthry&P=2988
> states the algorithm used, but in a way I don't understand. It says:
>
> "This value has been checked by computing pi(10^21+10^8) with
> a different parameter y used in the algorithm"
>
> but y is not defined!

This page:

http://reference.wolfram.com/mathematica/note/SomeNotesOnInternalImplementation.html#12788

says what algorithm is used in Mathematica to compute PrimePi, and it
is definitely *not* the one used in Sage.  It is a completely
different harder-to-implement algorithm with better asymptotic
complexity.

In any case ...

[sage-devel] Re: Sage 3.4.2.rc0 release!

2009-05-02 Thread Dr. David Kirkby

mabshoff wrote:

>> He told me Mathematica can go up to about 2^45 or so, but not beyond.
> 
> At least for MMA 6.0 on linux x86-64 the limit seems to be around
> 2^47:

As I said in the other post, the limit is PrimePi[249999999999999].

>         MMA    Sage
> 
> 2^44:   18.04  110.88   (597116381732)
> 2^45:   29.98  207.61   (1166746786182)
> 2^46:   47.59  389.98   (2280998753949)
> 2^47:   89.25  728.84   (4461632979717)
> 2^48:   NA :)  about an hour - correct?
> 
> According to Alex's numbers at least on his laptop 2^46 was correct on
> 32 bits, but given the length of the test (~6 minutes on sage.math
> this isn't really doctestable).
> 
>> The algorithm in Mathematica is completely different (and better) than
>> what Andrew implemented for Sage.   As far as I know the situation for
>> computing pi(X) using general purpose math software is thus:

One (admittedly unlikely) possibility is that Wolfram have used the same 
algorithm, but used hand-optimised assembly code, rather than a 
high-level language. I think WRI claim the kernel is C/C++, but that 
permits bits of inline assembly.


>>* Mathematica -- has the best implementation available
>>
>>* Sage -- now has the second best implementation available
> 
> Yep, the old implementation was about 1000 times slower than Andrew's
> which is about 5 times slower than MMA 6.0 - so great job from
> catching us up from 5000 times to only 5 times :).

I gather you have managed to get a SPARC version of Sage running. It 
would be interesting to compare the performance on SPARC of Mathematica 
and Sage. I very much doubt Wolfram would have used hand-optimised 
assembly code on SPARC, so if the timings still show Mathematica to be 
5x quicker, then yes, I would agree it's a better algorithm. But if the 
timings are very similar, it suggests to me that perhaps the better 
performance of Mathematica is due to writing assembly code, rather than 
using a high-level language.

>>* Pari, Maple, Matlab -- "stupid" implementations, which all simply
>> enumerate all primes up to x and see how many there are.  Useless, and
>> can't come close to what Sage or Mathematica do.

> Well, what should we pick as upper bound? 2^40 seems to be what Andrew
> suggests, but maybe 2^42 or 2^43? In that range we can actually add
> #long doctests and I would be much more comfortable that we would at
> least catch potential issues.

Personally, if Sage can go to 249999999999999, then I would use that as 
an upper bound. If the algorithm in Sage gives the same answer as 
Mathematica

In[95]:= PrimePi[249999999999999]
Out[95]= 7783516108362

then why not use Sage to there? The poor SPARC I used for this was very 
heavily loaded and quite old, but it only took 20 minutes or so. Even an 
algorithm that is 5x slower should be able to compute 
primepi(249999999999999) in under an hour on any half-decent modern 
machine.

Of course, a search of the literature for a better algorithm might have 
some mileage. Unfortunately, I no longer have access to a university 
account and so can't search journals without paying for access to 
individual ones, which clearly I can't justify. Neither am I 
particularly good at maths, having never studied maths beyond that 
needed for an engineering degree.

A look at the Mathworld entry for the Prime Counting Function

http://mathworld.wolfram.com/PrimeCountingFunction.html

might give some clues as to the algorithm Mathematica uses. Although the 
algorithm Mathematica uses is not stated on that MathWorld page, there are a 
number of references. Two look interesting:

Mapes, D. C. "Fast Method for Computing the Number of Primes Less than a 
Given Limit." Math. Comput. 17, 179-185, 1963.

Gourdon, X. "New Record Computation for pi(x), x=10^21." 27 Oct 2000. 
http://listserv.nodak.edu/scripts/wa.exe?A2=ind0010&L=nmbrthry&P=2988

The link 
http://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind0010&L=nmbrthry&P=2988 
states the algorithm used, but in a way I don't understand. It says:

"This value has been checked by computing pi(10^21+10^8) with
a different parameter y used in the algorithm"

But y is not defined! I'll try to drop the author of that post an email, 
to see if he knows of any fast algorithm that might be applicable to the 
general purpose case.

Since PrimePi[n] has been computed up to n=10^21, and neither Mathematica nor Sage 
can do PrimePi[10^15], there may well be a *lot* faster algorithm known.


Dave

--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to 
sage-devel-unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~--~~~~--~~--~--~---