Some partial answers to a subset of those questions:
On Sep 12, 2007, at 7:26 PM, John Voight wrote:

> (1) At a certain moment in the algorithm, I need to test if a
> polynomial with integer coefficients is squarefree (using exact
> arithmetic). Is it acceptable practice to do this in a brutal way,
> like:
>
>     from sage.rings.all import PolynomialRing, IntegerRing, gcd
>     ZZx = PolynomialRing(IntegerRing(), 'x')
>     [...]
>     f = ZZx([...])
>     df = ZZx([...])
>     if gcd(f, df) != 1:
>
> What exactly is happening internally there? And what is the overhead
> and effect on the timing?

Presently, it's creating a Polynomial_integer_dense object, which wraps an ntl_ZZX object (from ntl.pyx), which in turn wraps an NTL ZZX polynomial. NTL then does the arithmetic. If the polynomials are "large" (whatever that means), all this wrapping won't incur significant overhead; if they are small, the overhead may be a problem. How large are the polynomials (degree, bits per coefficient)?

> (1a) Shouldn't the integer polynomial gcd work using modular
> arithmetic to avoid coefficient blow-up? Or does it do this already?
> (If gcd(f,g) mod p is 1 for p >> 0 then gcd(f,g) = 1, etc.)

NTL does this already.

> (3) I understand that there is significantly more overhead when a
> function is declared using
>     cdef incr(self)
> instead of
>     def incr(self)
> in tr_data.spyx. But if I do this, how do I then access incr in
> totallyreal.py?

You mean significantly *less* overhead? You can't access a cdef method directly from Python; it can only be accessed from Cython. (Declaring it cpdef makes it callable from both, at a small extra cost.)

david

To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://sage.scipy.org/sage/ and http://modular.math.washington.edu/sage/
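As a plain-Python sketch of the two ideas discussed above (no Sage/NTL; all function names here are hypothetical, and coefficient lists are lowest-degree first): f is squarefree over Q exactly when gcd(f, f') is constant, and a single "good" prime gives a one-sided certificate, since gcd(f, f') ≡ 1 (mod p) for a prime p not dividing the leading coefficient already forces gcd(f, f') = 1 over Z.

```python
from fractions import Fraction

def trim(f):
    # Drop trailing zero coefficients in place.
    while f and f[-1] == 0:
        f.pop()
    return f

def poly_gcd(a, b):
    # Euclidean gcd of polynomials over Q (exact, using Fractions).
    a = trim([Fraction(c) for c in a])
    b = trim([Fraction(c) for c in b])
    while b:
        # Reduce a modulo b by repeated leading-term elimination.
        while len(a) >= len(b):
            q = a[-1] / b[-1]
            shift = len(a) - len(b)
            for i, c in enumerate(b):
                a[shift + i] -= q * c
            trim(a)
        a, b = b, a
    return a

def is_squarefree(f):
    # f (integer coefficients) is squarefree iff gcd(f, f') is constant.
    df = [i * c for i, c in enumerate(f)][1:]
    return len(poly_gcd(f, df)) <= 1

def squarefree_certificate_mod_p(f, p):
    # One-sided check: True means gcd(f, f') = 1 mod p, hence f is
    # squarefree over Z (for p prime, not dividing the leading
    # coefficient). False is inconclusive for a "bad" prime.
    def trim_p(a):
        while a and a[-1] % p == 0:
            a.pop()
        return a
    a = trim_p([c % p for c in f])
    b = trim_p([(i * c) % p for i, c in enumerate(f)][1:])
    while b:
        while len(a) >= len(b):
            # pow(x, -1, p) (Python >= 3.8) gives the inverse mod p.
            q = a[-1] * pow(b[-1], -1, p) % p
            shift = len(a) - len(b)
            for i, c in enumerate(b):
                a[shift + i] = (a[shift + i] - q * c) % p
            trim_p(a)
        a, b = b, a
    return len(a) <= 1
```

For polynomials of nontrivial degree and coefficient size, the exact rational gcd suffers the coefficient blow-up mentioned in (1a); the modular check avoids it, which is why NTL's gcd works modularly under the hood.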