Ian Beaver added the comment:

It's not multi-dimensional slicing to get a subset of objects as in NumPy, but
rather the ability to slice a buffer containing a multi-dimensional array as raw
bytes.  Buffer objects in Python 2.7 are dimensionality-naive, so this works fine
there.  You were correct that I was testing against Python 3.2; in Python 3.3
slicing with ndim > 1 works, but only for reading from the buffer.  I still can't
write back into a memoryview object with ndim > 1 in Python 3.3.

Python 2.7.3:
>>> import numpy as np
>>> arr = np.zeros(shape=(100,100))
>>> type(arr.data)
<type 'buffer'>
>>> arr.data[0:10]
'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
>>> 

Python 3.2.3:
>>> import numpy as np
>>> arr = np.zeros(shape=(100,100))
>>> type(arr.data)
<class 'memoryview'>
>>> arr.data[0:10]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NotImplementedError
>>> 

Python 3.3.3:
>>> import numpy as np
>>> arr = np.zeros(shape=(100,100))
>>> type(arr.data)
<class 'memoryview'>
>>> arr.data[0:10]
<memory at 0x7faaf1d03a48>
>>> 


However, to write data back into the buffer:

Python 2.7.3:
>>> import numpy as np
>>> arr = np.zeros(shape=(100,100))
>>> arr.data[0:10] = '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
>>> 

Python 3.2.3:
>>> import numpy as np
>>> arr = np.zeros(shape=(100,100))
>>> arr.data[0:10] = b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NotImplementedError
>>> 

Python 3.3.3:
>>> import numpy as np
>>> arr = np.zeros(shape=(100,100))
>>> arr.data[0:10] = b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NotImplementedError: memoryview assignments are currently restricted to ndim = 1
>>> 


Also, the slice in Python 3.3 is not the same as returning a chunk of raw bytes
from the memory buffer: instead of a bytes object, the indexing behaves like
NumPy array indexing and you get the (sub-)array items back as Python objects.

Python 2.7.3:
>>> import numpy as np
>>> arr = np.zeros(shape=(100,100))
>>> arr.data[0:10]
'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
>>> len(bytes(arr.data[0:10]))
10

Python 3.3.3:
>>> import numpy as np
>>> arr = np.zeros(shape=(100,100))
>>> arr.data[0:10]
<memory at 0x7f109a71ea48>
>>> len(bytes(arr.data[0:10]))
8000
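
Looking at the slice's metadata in the same Python 3.3 session makes the 8000
clear; it should show that arr.data[0:10] is the first 10 rows (10 x 100 float64
items) rather than the first 10 bytes:

>>> m = arr.data[0:10]
>>> m.ndim, m.shape, m.itemsize
(2, (10, 100), 8)
>>> m.nbytes        # 10 rows * 100 columns * 8 bytes per item
8000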

This is not a big deal in my case: since I already have NumPy arrays, I can just
use bytes(arr.flat[start:end]) to scan through the array contents in byte chunks
(sketched below).  That would not be possible with only a memoryview object, as
it was with the Python 2 buffer object, without converting it to something else
or dropping to ctypes and iterating over the memory addresses and dereferencing
the contents.
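
The chunked scan I mean looks roughly like this (a sketch; chunk_items and the
stream handling are placeholders):

>>> import numpy as np
>>> arr = np.zeros(shape=(100, 100))
>>> chunk_items = 1000                      # hypothetical chunk size, in array items
>>> for start in range(0, arr.size, chunk_items):
...     chunk = bytes(arr.flat[start:start + chunk_items])  # 8000 raw bytes per chunk
...     # hand chunk to the compression/encryption stream here
...
>>>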

So Python 3.3 is halfway to the functionality of Python 2.7: I can send chunks
of the data through a compressed or encrypted stream, but I can't rebuild the
data on the other side without first creating a bytearray and eating the cost of
a copy into a memoryview.  All I really need is a way to reconstruct the original
memoryview buffer in memory from a stream of bytes, without having to make a
temporary object first and then copy its contents into the final memoryview
object when it is complete.
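
Roughly, the receive-side workaround I'm describing looks something like this
sketch (the chunks list stands in for whatever comes off the stream, and dest
for the pre-existing target array):

>>> import numpy as np
>>> src = np.arange(10000, dtype=np.float64).reshape(100, 100)  # stand-in for the sender's data
>>> chunks = [bytes(src.flat[i:i + 1000]) for i in range(0, src.size, 1000)]
>>> buf = bytearray()                       # temporary object I'd rather not need
>>> for chunk in chunks:
...     buf.extend(chunk)
...
>>> dest = np.zeros(shape=(100, 100))       # pre-existing array whose buffer should be refilled
>>> dest[:] = np.frombuffer(buf, dtype=np.float64).reshape(100, 100)  # the extra copy
>>> (dest == src).all()
True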

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue14130>
_______________________________________