Médéric Boquien added the comment:
Thanks for the explanations, Charles-François. I guess the new API will not
arrive before 3.5 at the earliest. Is there still a chance to integrate my
patch (or any other) to improve the situation for the 3.4 series, though?
Médéric Boquien added the comment:
"the process will get killed when first writing to the page in case of memory
pressure."
According to the documentation, the returned shared array is zeroed.
https://docs.python.org/3.4/library/multiprocessing.html#module-multiprocessing.shared
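A minimal illustration of that documented guarantee (an editorial sketch,
assuming stock CPython; names are illustrative):

from multiprocessing.sharedctypes import RawArray
import ctypes

# The documentation promises zero-initialized memory, so every element
# of a freshly created shared array should read as zero.
arr = RawArray(ctypes.c_double, 8)
assert all(x == 0.0 for x in arr)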
Médéric Boquien added the comment:
If I remember correctly, the problem is that some OSes, such as Linux (and
probably others), do not really allocate space until something is written. If
that is the case, the process may get killed later on, when it writes
something into the array.
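A minimal sketch of that lazy-allocation behaviour (assuming a Linux machine
with default overcommit; the sizes are hypothetical and must exceed available
RAM to show the effect):

from multiprocessing.sharedctypes import RawArray
import ctypes

n = 4 * 1024**3 // 8  # hypothetical: 4 GiB worth of c_double elements
arr = RawArray(ctypes.c_double, n)  # may succeed even without enough RAM
# Physical pages are only allocated on first write, so under memory
# pressure the OOM killer may strike in this loop, long after creation.
for i in range(0, n, 4096 // 8):  # touch one element per 4 KiB page
    arr[i] = 1.0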
Médéric Boquien added the comment:
I have now signed the contributor's agreement.
As for the unit test, I was looking into it. However, I was wondering how to
write a test that would have triggered the problem: it only shows up for very
large arrays, and it depends on how much memory is already occupied.
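One conceivable approach (a sketch only, not necessarily the test that ended
up in the patch; the 1 GiB cap is a hypothetical value, and the resource
module requires Unix) is to shrink the address-space limit so that "very
large" becomes cheap to reproduce:

from multiprocessing.sharedctypes import RawArray
import ctypes
import resource

# Best run in a throwaway subprocess, since the lowered limit cannot be
# raised back afterwards.
limit = 1024**3  # hypothetical 1 GiB address-space cap
resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
# Ask for more than half of the cap: this should fail before the fix if
# creating the array transiently needs a second, comparably large buffer.
arr = RawArray(ctypes.c_char, int(limit * 0.6))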
Médéric Boquien added the comment:
New update of the patch following Antoine Pitrou's comments. PEP8 does not
complain anymore.
----------
Added file: http://bugs.python.org/file34687/shared_array.diff
Changes by Médéric Boquien:
Removed file: http://bugs.python.org/file34685/shared_array.diff
Médéric Boquien added the comment:
Updated the patch not to create a uselessly large array if the size is
smaller than the block size.
----------
Added file: http://bugs.python.org/file34686/shared_array.diff
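For context, a sketch of the block-wise zeroing idea the comment above
describes (illustrative names, not the actual diff):

import tempfile

def zero_fill(f, size, blocksize=1024 * 1024):
    # Write zeros in fixed-size blocks so that no intermediate bytes
    # object as large as the whole arena is ever created. The block is
    # capped at the requested size, avoiding a uselessly large block
    # for small arrays.
    block = b'\0' * min(size, blocksize)
    written = 0
    while written < size:
        n = min(len(block), size - written)
        f.write(block[:n])
        written += n

with tempfile.TemporaryFile() as f:
    zero_fill(f, 10 * 1024**2)  # 10 MiB written in 1 MiB blocks

With a fixed block size, zero-filling an arbitrarily large backing file needs
only about one block of transient memory instead of a buffer as large as the
file itself.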
New submission from Médéric Boquien:
It is currently impossible to create multiprocessing shared arrays larger
than 50% of the memory size under Linux (and, I assume, other Unices). A
simple test case would be the following:
from multiprocessing.sharedctypes import RawArray
import ctypes

# Hypothetical size: 6 GiB worth of doubles, i.e. more than half the RAM
# of an 8 GiB machine.
foo = RawArray(ctypes.c_double, 6 * 1024**3 // 8)