Dear all,

Thanks a lot for your help. It looks like yet another useful function is going to be implemented in Sage. Just to give an example where quad is faster than GSL (sorry for the length):
sage: def insol(epsilon, ecc, varpi, phi, lambd):
...       S0 = 1368
...       phirad = phi*pi/180.
...       lambdarad = lambd*pi/180.
...
...       sindelta = sin(epsilon)*sin(lambdarad)
...       cosdelta = sqrt(1 - sindelta^2)
...       nu = lambdarad - varpi    # true anomaly
...       sin_phi_sin_delta = sin(phirad)*sindelta
...       cos_phi_cos_delta = cos(phirad)*cosdelta
...       tan_phi_tan_delta = min(max(sin_phi_sin_delta / cos_phi_cos_delta, -1), 1)
...       H0 = acos(-tan_phi_tan_delta)    # solar time of sunset
...       sin_H0 = sqrt(1 - tan_phi_tan_delta*tan_phi_tan_delta)    # 2*H0: day length
...       rho = (1 - ecc^2)/(1 + ecc*cos(nu))    # earth-sun distance in astronomical units
...       W = S0/(pi*rho^2)*(H0*sin_phi_sin_delta + cos_phi_cos_delta*sin_H0)    # daily mean shortwave down
...       dtdl = rho^2 / sqrt(1 - ecc^2)    # 1/(dlambda/dt)
...       return W*dtdl
sage: timeit('quad(lambda x: insol(0.3,0.05,0,80,x), 0, 360)')
5 loops, best of 3: 711 ms per loop
sage: timeit('numerical_integral(lambda x: insol(0.3,0.05,0,80,x), 0, 360)')
5 loops, best of 3: 1.06 s per loop

The difference does not seem to be very large, though.

Cheers,
Stan

On Dec 5, 6:17 am, Jason Grout <[EMAIL PROTECTED]> wrote:
> William Stein wrote:
> >> Should we phase GSL out of numerical_integral too? Should we replace it
> >> with the equivalent scipy call (which would make it massively shorter
> >> and simpler)?
>
> > Yes, it is very tempting to do so. One thing is that each function
> > evaluation could in theory be much faster with the GSL version, since
> > GSL takes as input a C-level callback function, whereas I think scipy's
> > quadpack wrapper doesn't.
>
> Using the example from this thread:
>
> sage: timeit('quad(ff, 0, 18)')
> 625 loops, best of 3: 157 µs per loop
> sage: timeit('numerical_integral(ff, 0, 18)')
> 625 loops, best of 3: 78.4 µs per loop
>
> So GSL is twice as fast.
>
> A few more examples pulled from thin air:
>
> sage: f = 250*cos(pi*x/180)^1.8 + 170.35*sin(x*pi) + log(1+x)
> sage: ff = fast_float(f, 'x')
> sage: timeit('quad(ff, 0, 18)')
> 625 loops, best of 3: 136 µs per loop
> sage: timeit('numerical_integral(ff, 0, 18)')
> 625 loops, best of 3: 104 µs per loop
> sage: f = 250*cos(pi*x/180)^1.8 + 170.35*sin(x*pi) + log(1+x) + 1/sqrt(1+x) + x^(0.3)
> sage: ff = fast_float(f, 'x')
> sage: timeit('quad(ff, 0, 18)')
> 625 loops, best of 3: 699 µs per loop
> sage: timeit('numerical_integral(ff, 0, 18)')
> 625 loops, best of 3: 142 µs per loop
>
> In each case, GSL is better or way better.
>
> > Jason said:
>
> >> Both GSL and scipy call quadpack.
>
> > I'm not sure exactly what this means, since scipy's quadpack is probably
> > a Fortran library, but GSL is definitely built 100% Fortran-free (there
> > is no Fortran code in GSL and no Fortran dependencies). Maybe GSL has a
> > C port of quadpack, or some other sort of translation of the code. So I
> > suspect GSL and scipy are calling into separate code that is compiled
> > with a different compiler, so there could be differences in performance
> > and capabilities.
>
> Sorry; from the GSL docs:
>
> The library reimplements the algorithms used in quadpack, a numerical
> integration package written by Piessens, Doncker-Kapenga, Uberhuber and
> Kahaner. Fortran code for quadpack is available on Netlib.
>
> So you're right, there could be performance differences in the library
> alone.
> Jason
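
P.S. For anyone reading along outside Sage, here is a minimal sketch of what the "equivalent scipy call" mentioned above would look like in plain Python. It assumes only the standard SciPy API: scipy.integrate.quad wraps QUADPACK and, like Sage's GSL-backed numerical_integral, returns the approximation together with an error estimate. The integrand f below is just an illustrative stand-in loosely modeled on the examples in the thread, not the insolation function.

# sketch only: scipy.integrate.quad returns (value, estimated absolute error)
from math import cos, log, pi, sqrt
from scipy import integrate

def f(x):
    # smooth test integrand, similar in flavor to the examples above
    return 250*cos(pi*x/180)**1.8 + log(1 + x) + 1/sqrt(1 + x)

value, error = integrate.quad(f, 0, 18)
print(value, error)

Since f here is a Python callback, every evaluation goes through the interpreter; that per-call overhead is exactly the point raised above about GSL accepting a C-level callback.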