On 4/4/2011 1:14 PM, Terry Reedy wrote:
> On 4/4/2011 5:23 AM, Paul Rubin wrote:
>> Gregory Ewing <greg.ew...@canterbury.ac.nz> writes:
>>> What might help more is having bytecodes that operate on
>>> arrays of unboxed types -- numpy acceleration in hardware.
>> That is an interesting idea as an array or functools module patch.
>> Basically a way to map or fold arbitrary functions over arrays, with a
>> few obvious optimizations to avoid refcount churning. It could have
>> helped with a number of things I've done over the years.
> For map, I presume you are thinking of an array.map(func) in system code
> (C for CPython) equivalent to
>
> def map(self, func):
>     for i, ob in enumerate(self):
>         self[i] = func(ob)
>
> The question is whether that would be enough faster to matter. Of
> course, what would really be needed for speed are wrapped system-coded
> funcs that map would recognize, passing unboxed array items to them and
> receiving unboxed results back. At that point, we have just about
> reinvented 1-D numpy ;-).
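
For anyone who wants to play with the idea now, here is a rough
pure-Python prototype; MapArray and its .map are just illustrative
names, and a real version would be coded in C so the loop does not box
and unbox every item:

import array

class MapArray(array.array):
    # Pure-Python sketch of the proposed in-place map; the real thing
    # would run this loop in C over the unboxed buffer.
    def map(self, func):
        for i, ob in enumerate(self):
            self[i] = func(ob)
        return self

a = MapArray('d', [1.0, 2.0, 3.0])
a.map(lambda x: x * x)
print(list(a))    # [1.0, 4.0, 9.0]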
> I have always thought the array module was underutilized, but I see now
> that it only offers Python code a space saving, at the cost of
> interconversion time. To be really useful, arrays of unboxed data need
> system-coded functions that operate directly on the unboxed data, the
> way strings and bytes have. Array comes with a few, but very few,
> generic sequence methods, such as .count(x) (a special case of
> reduction).
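
A quick sketch of that interconversion cost: the data sit in the array
as raw C doubles, the handful of built-in methods such as .count(x) run
in C, but any Python-level transformation has to box each item into a
float object and unbox the result again:

import array

a = array.array('d', range(1000))   # stored as raw C doubles, not float objects

print(a.count(3.0))                 # 1 -- one of the few C-coded methods

# Doubling every element from Python creates and destroys a float object
# per item; the proposed C-coded map, given a wrapped C function, could
# operate on the raw doubles directly.
for i, x in enumerate(a):
    a[i] = 2.0 * x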
After posting this, I realized that ctypes makes it easy to find and
wrap functions in a shared library as Python objects (possibly with
parameter annotations) that could be passed to array.map, etc. No
swigging needed, and swigging is harder than writing the simple C
functions themselves. So a small extension to array with .map, .filter,
and .reduce, plus a wrapper class, would be more useful than I thought.
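
To make that concrete, a sketch of the ctypes side (assuming a Unix-ish
system where find_library can locate libm; the a.map call mentioned in
the comment is the hypothetical extension, so a plain Python loop stands
in for it here):

import array
import ctypes
import ctypes.util

# Wrap double sqrt(double) from the C math library.
libm = ctypes.CDLL(ctypes.util.find_library('m'))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

a = array.array('d', [1.0, 4.0, 9.0])

# With the proposed extension this would be just:  a.map(libm.sqrt)
# and a C-level map could recognize the ctypes prototype and pass
# unboxed doubles straight through.  For now, the Python loop:
for i, x in enumerate(a):
    a[i] = libm.sqrt(x)

print(list(a))    # [1.0, 2.0, 3.0]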
--
Terry Jan Reedy