Jean-Michel Pichavant wrote:

> I'm currently writing python code that writes a small binary file to be
> used by another device whose code is written in C. The python code runs
> on a little endian CPU, and unfortunately, the other device is using a
> big endian MIPS.
>
> My problem is the following: I cannot make an array of n ints work in
> big endian. I can work around the problem by creating n integers, which
> is nicely done thanks to a list comprehension, but that seems to me like
> a misuse of the ctypes module.
>
> ctypes is expecting a 'c_uint_be_Array_5' (note the _be_ for big endian),
> and I don't know how to construct such an object.
>
> I hope I am clear.
Yes, very clear, thanks for the effort, although the nitpicker in me has
to mention that an off-by-one bug slipped into your code ;)

> class Foo_be_working(ctypes.BigEndianStructure):
>     """Working big endian version, looks more like a workaround."""
>     _fields_ = [('bar%i' % i, ctypes.c_uint32) for i in range(5)]
[...]
> f_be = Foo_be_working(0,1,2,3,4)
> print "bar0 and bar5: ", f_be.bar0, f_be.bar5

(range(5) defines bar0 through bar4, so there is no bar5 to print)

I'm not very familiar with ctypes, but a random walk through the source
found the _endian module and the __ctype_be__ attribute. So

> iarray = ctypes.c_uint32*5  # array of 5 unsigned 32 bits
>
> class Foo(ctypes.Structure):
>     """Native byte order, won't work on my big endian device"""
>     _fields_ = [('bar', iarray)]
>
> class Foo_be(ctypes.BigEndianStructure):
>     """big endian version, but python code fails"""
>     _fields_ = [('bar', iarray)]

becomes

$ cat be2.py
import ctypes, sys

# Big-endian array of 5 unsigned 32-bit ints: build it from the
# __ctype_be__ variant of c_uint32 so the array type itself is big endian.
iarray_be = ctypes.c_uint32.__ctype_be__ * 5

class Foo_be(ctypes.BigEndianStructure):
    _fields_ = [('bar', iarray_be)]

print sys.version
f_be = Foo_be((0, 1, 2, 3, 0x11223344))
print hex(f_be.bar[4])

$ python be2.py
2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2]
0x11223344L

which might do what you want.
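For the original goal (writing the structure out as a small binary file for
the MIPS device), the bytes can be taken straight from the ctypes object,
since BigEndianStructure already stores them in big endian order. A minimal
sketch along those lines; the file name 'foo.bin' and the _pack_ setting are
assumptions on my part, not something from your code:

import ctypes

iarray_be = ctypes.c_uint32.__ctype_be__ * 5

class Foo_be(ctypes.BigEndianStructure):
    _pack_ = 1                        # assumption: match the C struct's packing if it is packed
    _fields_ = [('bar', iarray_be)]

f_be = Foo_be((0, 1, 2, 3, 0x11223344))

# Copy sizeof(f_be) raw bytes from the structure's address; they are already
# big endian, so no swapping is needed on the little endian host.
raw = ctypes.string_at(ctypes.addressof(f_be), ctypes.sizeof(f_be))
with open('foo.bin', 'wb') as out:    # 'foo.bin' is a placeholder name
    out.write(raw)

This runs unchanged on Python 2.7 and 3.x; the resulting 20-byte file should
hold 00000000 00000001 00000002 00000003 11223344, each word most significant
byte first.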