On Mon, Dec 22, 2008 at 9:57 AM, John Cremona <john.crem...@gmail.com> wrote:
>
> 2008/12/22 M. Yurko <myu...@gmail.com>:
>>
>> It is the exponential integral, but I want greater than double
>> precision. I tried PARI, and it reports the number to arbitrary
>> precision, but it only seems to be accurate to double precision, as
>> all digits after that are completely wrong. Also, I'm trying to compare a
>
> You should really report this as a bug to pari (though before you do,
> please check if there is still a problem with the latest version
> (2.4.2) of pari.  The version currently used in Sage is older).
>
> Can you give an example showing how pari is wrong?

I would bet money the user (or sage) is somehow mis-using pari, and
that there isn't a bug in pari itself.  It's easy to get confused
about how to do arbitrary precision using pari.

William
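
For reference, a minimal sketch of the usual pitfall (the calls used here --
pari, RealField, set_real_precision, eint1 -- are the standard Sage/pari
ones; treat this as an illustration, not a diagnosis of this particular
report):

from sage.all import RealField, pari

pari.set_real_precision(50)              # digits pari uses when printing

x_double   = pari(2.0)                   # input only carries double precision
x_highprec = pari(RealField(200)(2))     # input carries 200 bits

print x_double.eint1()     # only ~16 of the printed digits can be trusted
print x_highprec.eint1()   # accurate to roughly the precision of the input

The precision of pari's result follows the precision of the input real, so a
double-precision input silently caps the accuracy no matter how many digits
end up being printed.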

>
> I have a multiprecision version of this function in eclib (in C++,
> using NTL's RR type) but it is not currently wrapped in Sage.
>
> John Cremona
>
>> few acceleration ideas for the defining sequence, and this is just
>> the baseline to which I want to compare the other versions.
>>
>> On Dec 22, 11:43 am, "William Stein" <wst...@gmail.com> wrote:
>>> On Mon, Dec 22, 2008 at 8:40 AM, John Cremona <john.crem...@gmail.com> 
>>> wrote:
>>>
>>> > That looks very like the exponential integral you are computing.  If
>>> > so, you can use Sage's function Ei() which calls scipy's
>>> > special.exp1().
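
A quick illustration of the scipy call John mentions (and, as William notes
just below, scipy works in double precision only):

from scipy import special
print special.exp1(2.0)    # exponential integral E1(2), double precision only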
>>>
>>> Watch out, since scipy is double precision only.
>>>
>>> Pari has a real-only exponential integral that is arbitrary precision 
>>> though.
>>>
>>>  -- William
>>>
>>> > John Cremona
>>>
>>> > 2008/12/22 M. Yurko <myu...@gmail.com>:
>>>
>>> >> Alright, below is the original python implementation of the function:
>>>
>>> >> def python(x,bits):
>>> >>    i = 1
>>> >>    sum_current = RealNumber(x,min_prec=bits)
>>> >>    sum_last = 0
>>> >>    term = sum_current
>>> >>    add_term = (ln(x)+euler_gamma).n(bits)
>>> >>    while sum_current != sum_last:
>>> >>        i+=1
>>> >>        term = term*(x)/(i)
>>> >>        sum_last = sum_current
>>> >>        sum_current += term/i
>>> >>    return sum_current + add_term
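
(For reference, the series this loop sums is the standard one for the
exponential integral, Ei(x) = euler_gamma + ln(x) + sum_{k>=1} x^k/(k*k!)
for x > 0: term holds x^k/k!, and term/i contributes x^k/(k*k!).)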
>>>
>>> >> Then my original cython version at double precision:
>>> >> %cython
>>> >> cdef extern from "math.h":
>>> >>    double log(double x)
>>> >> def cython_double(long double x):
>>> >>    cdef int i = 1
>>> >>    cdef double sum_current = x
>>> >>    cdef double sum_last = 0
>>> >>    cdef double term = sum_current
>>> >>    cdef double add_term = log(x)+ 0.577215664901533
>>> >>    while sum_current != sum_last:
>>> >>        i+=1
>>> >>        term = term*(x)/(i)
>>> >>        sum_last = sum_current
>>> >>        sum_current += term/i
>>> >>    return sum_current + add_term
>>>
>>> >> And finally, the cython version using RealNumber:
>>> >> %cython
>>> >> from sage.rings.real_mpfr cimport RealNumber
>>> >> import sage.all
>>> >> from sage.all import log
>>> >> from sage.all import n
>>> >> def cython_arbitrary(x, int bits):
>>> >>    cdef int i = 1
>>> >>    cdef RealNumber sum_current = sage.all.RealNumber(x,min_prec=bits)
>>> >>    cdef RealNumber sum_last = sage.all.RealNumber(0, min_prec=bits)
>>> >>    cdef RealNumber term = sum_current
>>> >>    cdef RealNumber add_term = sage.all.RealNumber(log(x).n(bits) + 0.577215664901533, min_prec=bits)
>>> >>    while sum_current != sum_last:
>>> >>        i+=1
>>> >>        term = term*(x)/(i)
>>> >>        sum_last = sum_current
>>> >>        sum_current += term/i
>>> >>    return sum_current + add_term
>>>
>>> >> When I timed these functions for x = 1 through 700 at 53 bits of
>>> >> precision, the python one took 5.46 sec., the double precision cython
>>> >> one took only .02 sec., and the arbitrary precision one took 6.77 sec.
>>> >> After looking at the .c file that cython generated, it seems to be
>>> >> doing a lot of conversions, as simply initializing sum_current took
>>> >> almost 20 long lines of code.
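
For what it's worth, here is a rough sketch of the "use mpfr directly" route
mentioned further down the thread.  This is not the code under discussion:
the mpfr declarations are written out by hand, and it is assumed that the
%cython cell can find and link libmpfr (otherwise the externs would need to
be cimported from Sage's own declarations instead).

%cython
cdef extern from "math.h":
    double c_log "log"(double x)

cdef extern from "mpfr.h":
    ctypedef struct __mpfr_struct:
        pass
    ctypedef __mpfr_struct mpfr_t[1]
    ctypedef enum mpfr_rnd_t:
        GMP_RNDN
    void mpfr_init2(mpfr_t x, long prec)
    void mpfr_clear(mpfr_t x)
    int mpfr_set_d(mpfr_t rop, double op, mpfr_rnd_t rnd)
    int mpfr_set(mpfr_t rop, mpfr_t op, mpfr_rnd_t rnd)
    int mpfr_add(mpfr_t rop, mpfr_t op1, mpfr_t op2, mpfr_rnd_t rnd)
    int mpfr_mul(mpfr_t rop, mpfr_t op1, mpfr_t op2, mpfr_rnd_t rnd)
    int mpfr_div_ui(mpfr_t rop, mpfr_t op1, unsigned long op2, mpfr_rnd_t rnd)
    int mpfr_cmp(mpfr_t op1, mpfr_t op2)
    double mpfr_get_d(mpfr_t op, mpfr_rnd_t rnd)

def ei_series_mpfr(double x, int bits):
    """Sum the same series as above with plain mpfr calls in the inner loop."""
    cdef unsigned long i = 1
    cdef mpfr_t xr, term, s, s_last, tmp
    mpfr_init2(xr, bits); mpfr_init2(term, bits); mpfr_init2(s, bits)
    mpfr_init2(s_last, bits); mpfr_init2(tmp, bits)
    mpfr_set_d(xr, x, GMP_RNDN)
    mpfr_set(term, xr, GMP_RNDN)              # term = x   (k = 1 term)
    mpfr_set(s, xr, GMP_RNDN)                 # running partial sum
    mpfr_set_d(s_last, 0.0, GMP_RNDN)
    while mpfr_cmp(s, s_last) != 0:           # stop when the sum stops changing
        i += 1
        mpfr_mul(term, term, xr, GMP_RNDN)    # term *= x
        mpfr_div_ui(term, term, i, GMP_RNDN)  # term /= i     -> x^i / i!
        mpfr_set(s_last, s, GMP_RNDN)
        mpfr_div_ui(tmp, term, i, GMP_RNDN)   # tmp = x^i / (i * i!)
        mpfr_add(s, s, tmp, GMP_RNDN)
    cdef double result = mpfr_get_d(s, GMP_RNDN)
    mpfr_clear(xr); mpfr_clear(term); mpfr_clear(s)
    mpfr_clear(s_last); mpfr_clear(tmp)
    # A full version would add ln(x) + euler_gamma at the same precision and
    # return a Sage RealNumber; returning a double here just keeps the sketch
    # short, so only the series loop is shown at full precision.
    return result + c_log(x) + 0.577215664901533

The point is that nothing inside the while loop touches a Python object, which
is exactly what the RealNumber-based version above ends up doing on every
operation.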
>>> >> On Dec 22, 10:24 am, "Mike Hansen" <mhan...@gmail.com> wrote:
>>> >>> Hello,
>>>
>>> >>> On Mon, Dec 22, 2008 at 6:10 AM, M. Yurko <myu...@gmail.com> wrote:
>>>
>>> >>> > Thanks for your help. I tried your first and last suggestions, but
>>> >>> > they yielded code that was slower than the original python
>>> >>> > implementation. However, I'll take a look at sage.rings.real_mpfr and
>>> >>> > try to use mpfr directly.
>>>
>>> >>> Well, if I were to guess, it's probably because of the way the Cython
>>> >>> code is written.  Often when it is slower than the Python, it means
>>> >>> that Cython is doing a lot of conversions behind the scenes.  If you
>>> >>> were to post the code somewhere, I'm sure someone could take a look at
>>> >>> it and let you know.  Also, the annotated version obtained by "cython
>>> >>> -a" is useful in tracking these things down.
>>>
>>> >>> --Mike
>>>
>>> --
>>> William Stein
>>> Associate Professor of Mathematics
>>> University of Washington
>>> http://wstein.org



-- 
William Stein
Associate Professor of Mathematics
University of Washington
http://wstein.org
