On Sep 9, 9:17 pm, Robert Bradshaw <rober...@math.washington.edu>
wrote:
> On Thu, Sep 9, 2010 at 11:44 AM, KvS <keesvansch...@gmail.com> wrote:
> > On Sep 9, 5:27 pm, Robert Bradshaw <rober...@math.washington.edu>
> > wrote:
> >> On Wed, Sep 8, 2010 at 6:39 PM, KvS <keesvansch...@gmail.com> wrote:
> >> > Thanks all, however I have not been very successful so far :(.
>
> >> > I tried both options mentioned before:
>
> >> > - only optimize the loops in Cython and keep using symbolic
> >> > expressions/infinite precision, but this is unfortunately rather
> >> > slow;
>
> >> What do you mean by symbolic? Are you actually using maxima and the
> >> various calculus modules?
>
> > By symbolic/infinite precision I just mean keeping quantities like say
> > log(2) as they are, rather than approximating them by some finite
> > precision real number, sorry for the confusion. No, I am not using
> > Maxima or anything more advanced than the elementary operations, the
> > exponential function and the find_root function.
>
> >> > - fully optimize in Cython by turning to doubles everywhere; although
> >> > this gets very fast, what I was already afraid of happens: the
> >> > algorithm goes belly up after 15 steps :( (I would need a lot more
> >> > steps). In step 14 values like 2.51885860e-34 appear, and in
> >> > combination with the accumulated numerical noise this turns out to
> >> > produce very wrong values at the next step.
>
> >> > @Robert: I'm trying to implement an algorithm that computes a sequence
> >> > of functions v_k according to the rule v_k(x) = \max \{ K-e^x, \int
> >> > v_{k-1}(y) p(x+y) dy \}, v_0(x)=K-e^x, where p is a prob. density. The
> >> > background is an optimal stopping problem in stochastics. So at each
> >> > step I essentially first compute the integral and then determine where
> >> > the integral and x -> K-e^x intersect, using just the basic
> >> > find_root function in Sage.
>
> >> > It's not difficult (though very tedious...) to write down a formula
> >> > for the integral (given the specific density p I'm using) and work out
> >> > formulae for all the coefficients that appear in this formula for the
> >> > integral in terms of those appearing in v_{k-1}. The number of
> >> > coefficients (and hence computations) involved in the formula for v_k
> >> > increases very quickly with k, which explains why the algorithm gets
> >> > slow in 'symbolic mode'. However, 'double mode' is not good enough, as
> >> > mentioned above.
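For concreteness, here is a rough self-contained sketch of one step of the recursion. The density p, the value of K, the integration range and the crude midpoint quadrature are all illustrative stand-ins, not my actual setup (in the real code I work the integral out in closed form):

```python
import math

K = 1.0

def p(u, lam=2.0):
    """Illustrative density: a symmetric (two-sided) exponential.
    This is a made-up stand-in for the actual density in the problem."""
    return 0.5 * lam * math.exp(-lam * abs(u))

def v0(x):
    """Initial function v_0(x) = K - e^x."""
    return K - math.exp(x)

def step(v_prev, y_lo=-10.0, y_hi=10.0, n=2000):
    """Return v_k given v_{k-1} via the rule
    v_k(x) = max(K - e^x, integral of v_{k-1}(y) * p(x + y) dy),
    using a crude midpoint quadrature on a truncated range."""
    h = (y_hi - y_lo) / n
    def v_next(x):
        integral = sum(
            v_prev(y_lo + (i + 0.5) * h) * p(x + (y_lo + (i + 0.5) * h))
            for i in range(n)
        ) * h
        return max(K - math.exp(x), integral)
    return v_next

v1 = step(v0)
# The point where K - e^x meets the integral term can then be located
# with any root finder (find_root in Sage) applied to their difference.
```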
>
> >> It sounds like you're trying two extremes--how about using your
> >> "double mode" algorithm, but instead using elements of RealField(N)
> >> where N > 53? This should be much faster than "symbolic" (if I
> >> understand you right, calling find_root and integrate) but have higher
> >> precision than using raw doubles. This may be enough, without getting
> >> your hands on MPFR directly yourself.
>
> > (I don't need to do any integration in the code, sorry if my previous
> > post seemed to suggest that, the integral that occurs there I have
> > worked out completely in terms of elementary operations.) Yes, I have
> > tried your suggestion of using RealField (without Cython though) and
> > this works well in the sense that the added precision compared
> > with floats makes the algorithm produce reliable output, thanks!
>
> > However, it is still too slow.
>
> Just out of curiosity, how much did it help?

To give an idea (it will depend on the chosen input), I just produced
timings for one particular input with 5 steps in the algorithm:
symbolic/infinite precision takes ~12 secs versus ~0.2 secs using
RealField(100).
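For anyone curious where the doubles go wrong: a term of order 1e-34 riding on an O(1) quantity is simply annihilated at 53-bit precision. A quick stand-alone check (stdlib `decimal` used here only as a stand-in for Sage's RealField(100), which carries roughly 30 decimal digits):

```python
from decimal import Decimal, getcontext

# IEEE doubles carry ~16 significant digits, so adding 2.5e-34 to 1.0
# changes nothing at all:
lost = (1.0 + 2.5e-34) - 1.0          # the small term is gone

# With ~50 digits of working precision the small term survives intact:
getcontext().prec = 50
kept = (Decimal(1) + Decimal("2.5e-34")) - Decimal(1)
```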

>
> > If I could further speed it up by a few
> > factors that would be great. I will further try if I can figure out
> > how to use Cython to speed up parts of the algorithm where a lot of
> > looping and computing is done.
>
> That should be feasible--you probably don't need to change the whole
> thing--profile to see which parts need it. Excessive coercion may also
> impact performance.
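A minimal profiling sketch along these lines, using the standard-library cProfile/pstats (run_algorithm below is a dummy stand-in for the actual iteration; sorting by cumulative time shows which parts are worth moving to Cython):

```python
import cProfile
import io
import math
import pstats

def run_algorithm(steps=5):
    """Dummy stand-in for the real iteration: just burns some time
    in nested loops so the profiler has something to report."""
    total = 0.0
    for k in range(steps):
        for i in range(10_000):
            total += math.exp(-i / (k + 1))
    return total

pr = cProfile.Profile()
pr.enable()
run_algorithm()
pr.disable()

# Collect the report into a string, sorted by cumulative time.
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
```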
>
>
>
> >> You might also consider looking at how the dickman rho function is
> >> implemented:
> >> http://hg.sagemath.org/sage-main/file/b5dab6864f35/sage/functions/tra...
> >> which sounds similar in spirit.
>
> >> > I will try your suggestion for using mpfr, thanks Robert and Greg!
> >> > (however it looks a bit scary to me ;)). I need to store the huge
> >> > amounts of coefficients somewhere; I guess NumPy is my best bet, even
> >> > though I have to declare the dtype as object then, no?
>
> >> What do you mean by huge? Would a list suffice? Or if it's a
> >> polynomial, what about a polynomial over RR? For high precision, the
> >> per-element overhead is constant, so I don't see an advantage to using
> >> NumPy here just as a storage device if the dtype is object.
>
> > The function v_k is not a polynomial but a piecewise function
> > consisting of linear combinations of terms a*x^b*exp(c*x) (*) in each
> > part of its domain. 'Huge' means that the number of coefficients (i.e.
> > all the a, b and c's in (*)) to be computed in the k-th step of the
> > algorithm roughly grows like k^2. These coefficients are currently
> > organized in 4-dimensional Numpy-arrays: the k runs along one axis and
> > there are three other 'counters' (one to keep track of the part of the
> > domain etc.). I would prefer not to give up this way of representing
> > the coefficients, as my notes use this form as well. Would it be
> > better to use nested 4-d Python lists instead?
>
> No, for multi-dimensional stuff, NumPy is better, even if you're using
> the object dtype.

Ok, I will keep the NumPy approach then, thanks!
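For reference, a minimal sketch of the storage layout described above: a 4-d object array indexed as coeffs[k, part, i, j]. The shape and indices are purely illustrative, and stdlib Decimal stands in for RealField elements:

```python
from decimal import Decimal

import numpy as np

# 4-d coefficient store: axis 0 is the step k, axis 1 the part of the
# domain, axes 2-3 the remaining counters.  dtype=object lets each cell
# hold an arbitrary-precision number instead of a raw double.
K_MAX = 5
coeffs = np.empty((K_MAX, 3, K_MAX, K_MAX), dtype=object)
coeffs.fill(Decimal(0))

# Cells are read and written like any NumPy array; arithmetic on an
# object array dispatches to the elements' own high-precision methods.
coeffs[2, 0, 1, 1] = Decimal("2.5e-34")
```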

>
> - Robert

-- 
To post to this group, send email to sage-support@googlegroups.com
To unsubscribe from this group, send email to 
sage-support+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sage-support
URL: http://www.sagemath.org
