Serhiy Storchaka added the comment:

The Python interpreter itself takes 20-30 MB of memory. Using it on a machine that 
has less than a few tens of megabytes of free memory doesn't make much sense. 
MicroPython can be an exception, but it has special modules for low-level 
programming.

If you need to process a bytearray so large that creating even one copy of it is 
not possible, you can translate it element by element or in smaller chunks.

$ ./python -m perf timeit -s "table = bytes(range(256)).swapcase(); data = bytearray(range(256))*1000" -- "for i in range(len(data)): data[i] = table[data[i]]"
Median +- std dev: 205 ms +- 10 ms
$ ./python -m perf timeit -s "table = bytes(range(256)).swapcase(); data = bytearray(range(256))*1000" -- "for i in range(0, len(data), 100): data[i:i+100] = data[i:i+100].translate(table)"
Median +- std dev: 12.9 ms +- 0.6 ms
$ ./python -m perf timeit -s "table = bytes(range(256)).swapcase(); data = bytearray(range(256))*1000" -- "for i in range(0, len(data), 1000): data[i:i+1000] = data[i:i+1000].translate(table)"
Median +- std dev: 3.12 ms +- 0.22 ms
$ ./python -m perf timeit -s "table = bytes(range(256)).swapcase(); data = bytearray(range(256))*1000" -- "for i in range(0, len(data), 10000): data[i:i+10000] = data[i:i+10000].translate(table)"
Median +- std dev: 1.79 ms +- 0.14 ms

Translating by chunks also works with data that can't fit in memory at all. You 
can mmap a large file, or read and write it in blocks. An in-place bytearray 
method wouldn't help in this case.
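As a minimal sketch of the block-wise approach (the demo file, its contents, and the 64 KiB block size are illustrative choices, not part of the original report): read a block, translate it, and write it back in place, so only one block is resident at a time.

```python
import os
import tempfile

# Same translation table as in the benchmarks above.
table = bytes(range(256)).swapcase()

# Create a small demo file; in practice this would be a file too large
# to fit in memory.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"Hello, World!" * 1000)

BLOCK = 65536  # 64 KiB blocks; an assumed size, tune for the workload

# Translate the file in place, one block at a time.
with open(path, "r+b") as f:
    while True:
        pos = f.tell()
        block = f.read(BLOCK)
        if not block:
            break
        f.seek(pos)
        f.write(block.translate(table))

with open(path, "rb") as f:
    result = f.read()
os.remove(path)
print(result[:13])  # b'hELLO, wORLD!'
```

The same loop works unchanged whether the file is a few kilobytes or many gigabytes, since memory use is bounded by the block size rather than the file size.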

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue17301>
_______________________________________