I'm sure the overhead is tiny, but you are also calculating f' in each
iteration.
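To sketch the point: the derivative's coefficients can be computed once, outside the loop, instead of on every iteration. A minimal illustration (names and the low-degree-first coefficient order are my own assumptions, not the actual code):

```python
def newton_refine(coeffs, x0, iters=10):
    # Precompute f''s coefficients ONCE, outside the loop. coeffs is the
    # coefficient list of f, low degree first: [c0, c1, ..., cn].
    dcoeffs = [i * c for i, c in enumerate(coeffs)][1:]

    def horner(cs, x):
        # Evaluate a low-degree-first coefficient list at x via Horner's rule.
        acc = 0.0
        for c in reversed(cs):
            acc = acc * x + c
        return acc

    x = x0
    for _ in range(iters):
        # Each iteration only *evaluates* f and f'; neither is re-derived.
        x -= horner(coeffs, x) / horner(dcoeffs, x)
    return x
```

e.g. refining a root of x^2 - 2 from x0 = 1.5 converges to sqrt(2) to machine precision well within the default 10 iterations.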

On 9/5/07, Robert Bradshaw <[EMAIL PROTECTED]> wrote:
>
>
> On Sep 5, 2007, at 1:25 PM, John Voight wrote:
>
> > Wow Robert, thanks!  Talk about real-time recreating the wheel--my
> > code is quite similar, though I was using floats.
>
> Well, it was pretty quick to write, and I was curious as to the
> accuracy it would give. I wrote an MPFR-based root refiner too, and
> I'm trying to think of the best place to stick it into the Sage
> library...
>
> >
> > Three questions for ya:
> >
> > (1) Is there a reason to use Py_ssize_t instead of int or even
> > unsigned int?  If it's for length, I can't imagine we'd have
> > polynomials of that huge a degree...!
>
> Mostly habit and for consistency, but if int != Py_ssize_t, I'd
> imagine there could be an assembly op or two saved by not having to
> widen the former when it's being used as an index (though maybe not).
> Also, as of a little bit ago, coercion to a Py_ssize_t uses the
> __index__ method while coercion to an int truncates. I think the C
> compiler may complain if you try to assign an int from a Py_ssize_t
> on a 64-bit platform too.
>
> > (2) In my code, I specified an eps which controlled the number of
> > iterations rather than trying to predict that in advance--makes more
> > sense, no?
>
> There are pros and cons to both approaches. Suppose x is the true
> root and x' is the returned value. Your method guarantees a bound on
> |f(x')| whereas mine guarantees a bound on |x - x'|. (I suppose you
> could use f'(x') to approximate the error |x - x'| as well.) Yours
> has the advantage that you don't have to input the precision of the
> initial value, but the disadvantage that it is not guaranteed to
> terminate (though it probably will).
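For what it's worth, the two stopping criteria are easy to combine: stop when |f(x)| < eps, but cap the iteration count so the loop terminates either way. A sketch (the function and parameter names here are hypothetical, not from either implementation):

```python
def newton_eps(f, fprime, x0, eps=1e-12, max_iter=100):
    # Hybrid stopping rule: the eps test gives the |f(x')| bound discussed
    # above, while max_iter guarantees termination even when the eps test
    # is never satisfied.
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:
            return x
        x -= fx / fprime(x)
    return x
```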
>
> > (3) In double_newton_raphson, aren't we eating a lot of overhead with
> > lines like f = list(f) and copying the coefficients from f to coeffs?
> > I tried to get around this myself, but due to my status as a cherub
> > SAGE programmer, I couldn't ascertain the right data types to use...
>
> Doing f = list(f) is a bit of overhead, though not too much if f is
> already a list, and it's faster than indexing into a polynomial
> (though one could check for that first). Copying the coefficients to
> coeffs, which is a raw C array of doubles, saves you a lot of time in
> the evaluation phase, as it then operates on pure C data types
> instead of calling back into Python. (Multiplying two C doubles can
> be nearly two orders of magnitude faster than multiplying two Python
> floats.)
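The "check for that first" idea might look something like this in plain Python (the Cython version would fill a raw C array of doubles instead; plain Python floats stand in for that here, and the function name is made up):

```python
def coeffs_as_doubles(f):
    # Skip the list(f) copy when f is already a list; otherwise make one
    # list from the coefficient sequence. Either way, force each entry to
    # a machine double once, up front, so the evaluation loop never calls
    # back into arbitrary Python/Sage coefficient objects.
    if not isinstance(f, list):
        f = list(f)
    return [float(c) for c in f]
```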
>
> >
> > Yours,
> >
> > John Voight
> > Assistant Professor of Mathematics
> > University of Vermont
> > [EMAIL PROTECTED]
> > [EMAIL PROTECTED]
> > http://www.cems.uvm.edu/~voight/
> >
> >
> >
>
> >
>

--~--~---------~--~----~------------~-------~--~----~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://sage.scipy.org/sage/ and http://modular.math.washington.edu/sage/
-~----------~----~----~----~------~----~------~--~---
