Kunal <bhalla.ku...@gmail.com> added the comment:

Oh, it looks like it's because the context of the Event doesn't match the 
process's context when created this way (the default context on Linux is fork). 

The problem goes away if the Event instance is created from the spawn context 
-- specifically, patching dump_core.py:

```
@@ -22,14 +22,13 @@ def master_func(space:dict) -> None:

 if __name__ == "__main__":

-    this_event = multiprocessing.Event()
+    context_spawn = multiprocessing.get_context("spawn")

+    this_event = context_spawn.Event()
     this_space = dict(
         this_event=this_event,
     )

-    context_spawn = multiprocessing.get_context("spawn")
-
     master_proc = context_spawn.Process(
```

I think this can be closed; the incompatibility between contexts is also 
documented at 
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Lock, 
though it could be a little more explicit about potential segfaults.

> Note that objects related to one context may not be compatible with processes 
> for a different context. In particular, locks created using the fork 
> context cannot be passed to processes started using the spawn or forkserver 
> start methods.

(Event uses a Lock under the hood)

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue43832>
_______________________________________