On Nov 16, 2006, at 1:53 AM, Martin Albrecht wrote:

>> But I can see why it would be faster, given all the crap that sits
>> between us and those 16 bits.
>>
>> I don't necessarily have a problem with what you're doing, but in the
>> long run, we're better off just bloody well implementing the fields
>> ourselves.
>
> I don't understand that: you say you want to reimplement the finite
> extension field for the GF(q) stuff, to gain q * wordsize of RAM, by
> calculating on Python objects instead of ints at the lowest level?
> That could make the whole thing slower (if you'd work on pointers to
> Python objects) and would definitely increase the required RAM for
> certain applications. Right now, we could implement any matrix over
> GF(q) as a matrix with int entries, multivariate polynomials over
> GF(q) as polynomials over ints, etc., without using any Python stuff
> internally. Using Python objects at the innermost level would prevent
> that. Also, it feels like reinventing the wheel while driving a
> Porsche (... couldn't resist that pun).

Ah. Are you saying what you've done is a bit like the way Python
caches its small int objects?
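
Roughly something like this, I mean (just a rough sketch with made-up
names, not the SAGE code): CPython keeps a table of small int objects
and reuses them, and a GF(p) implementation could keep a similar
per-modulus table so arithmetic never allocates new objects:

    # CPython pre-allocates small int objects, so arithmetic that lands in
    # that range hands back a cached object instead of allocating a new one.
    a = 25 + 25
    b = 50
    print(a is b)          # True: both names point at the one cached int 50

    # The same trick for GF(p): build each residue class once per modulus
    # and let arithmetic return the cached objects, so no new Python
    # objects are allocated during a long computation.
    class GFpElement:
        __slots__ = ("p", "value")

        def __init__(self, p, value):
            self.p = p
            self.value = value

        def __add__(self, other):
            # Look up the cached element rather than constructing a new one.
            return _cache(self.p)[(self.value + other.value) % self.p]

        def __repr__(self):
            return "%d (mod %d)" % (self.value, self.p)

    _caches = {}

    def _cache(p):
        # One table of p elements per modulus, created on first use.
        if p not in _caches:
            _caches[p] = [GFpElement(p, v) for v in range(p)]
        return _caches[p]

    def GFp(p, value):
        return _cache(p)[value % p]

    # 3 + 6 = 2 in GF(7); the sum *is* the cached element 2, like small ints.
    x = GFp(7, 3) + GFp(7, 6)
    y = GFp(7, 2)
    print(x is y)          # True

Is that the kind of caching you had in mind, or something lower-level?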

David

