[issue46888] SharedMemory.close() destroys memory

2022-03-01 Thread Ronny Rentner


New submission from Ronny Rentner:

According to 
https://docs.python.org/3/library/multiprocessing.shared_memory.html#multiprocessing.shared_memory.SharedMemory.close
calling close() on a shared memory block is not supposed to destroy it.

Unfortunately, this is true on Linux but not on Windows.

I've tested this in a Windows VM on VirtualBox like this:

```
Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import multiprocessing.shared_memory
>>> creator = multiprocessing.shared_memory.SharedMemory(create=True, name='mymemory', size=1)
>>> creator.buf[0] = 1
>>> creator.buf[0]
1
>>> # According to the docs, close() is not supposed to destroy 'mymemory', but it does.
>>> creator.close()
>>>
>>> user = multiprocessing.shared_memory.SharedMemory(name='mymemory')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\multiprocessing\shared_memory.py", line 161, in __init__
    h_map = _winapi.OpenFileMapping(
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'mymemory'
>>> # Shared memory was destroyed by close()
```
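
For contrast, this is the lifecycle the documentation describes, and it is what actually happens on Linux: close() only detaches the calling process's view, and only unlink() destroys the block:

```
from multiprocessing import shared_memory

creator = shared_memory.SharedMemory(create=True, name='mymemory', size=1)
creator.buf[0] = 1
creator.close()    # detach this process's view; the block should survive

# On Linux, reattaching by name succeeds and the data is still there.
user = shared_memory.SharedMemory(name='mymemory')
assert user.buf[0] == 1
user.close()
user.unlink()      # only this call is supposed to destroy the block
```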

--
components: Windows
messages: 414258
nosy: paul.moore, ronny-rentner, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: SharedMemory.close() destroys memory
type: behavior
versions: Python 3.10

___
Python tracker <https://bugs.python.org/issue46888>
___



[issue46888] SharedMemory.close() destroys memory

2022-03-01 Thread Ronny Rentner


Ronny Rentner added the comment:

Thanks for your quick response.

My bigger goal is real-time audio and video processing, where I use multiple 
processes that need to be synchronized. I use shared memory for that.

As a small spin-off, I've hacked together a dict that uses shared memory as 
storage.

It works like this: it uses one shared memory block for streaming updates, 
which is efficient because only changes are transferred. Once the streaming 
buffer is full, or if any single update to the dict is larger than the 
streaming buffer, it creates a full dump of the whole dict in a new shared 
memory block that is exactly as big as needed. Any user of the dict then 
consumes the full dump.
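
A minimal sketch of that scheme (hypothetical names, heavily simplified: no 
locking, no message framing for readers, and not the actual UltraDict code):

```
import pickle
from multiprocessing import shared_memory

STREAM_SIZE = 4096  # assumed size of the streaming buffer

class SharedDictSketch:
    def __init__(self, name='mydict'):
        self.name = name
        self.data = {}
        self.pos = 0          # write position in the streaming buffer
        self.dump_count = 0
        # One fixed-size block for streaming incremental updates.
        self.stream = shared_memory.SharedMemory(
            create=True, name=f'{name}_stream', size=STREAM_SIZE)

    def __setitem__(self, key, value):
        self.data[key] = value
        update = pickle.dumps((key, value))
        if self.pos + len(update) > STREAM_SIZE:
            # Buffer full (or a single update too large): fall back to a dump.
            self._full_dump()
        else:
            self.stream.buf[self.pos:self.pos + len(update)] = update
            self.pos += len(update)

    def _full_dump(self):
        blob = pickle.dumps(self.data)
        # A new block, exactly as big as needed; readers attach by name.
        dump = shared_memory.SharedMemory(
            create=True,
            name=f'{self.name}_dump_{self.dump_count}',
            size=len(blob))
        dump.buf[:len(blob)] = blob
        self.dump_count += 1
        self.pos = 0          # the streaming buffer can be reused
```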

On Linux this works great: any user of the dict can create a full dump in a 
new shared memory block, and all other users of the same dict can consume it.

On Windows, the issue is that if the creator process of the full dump goes 
away, the shared memory goes away with it. This contradicts the Python docs, 
unfortunately.

I don't fully understand the underlying implementations, but I've been looking 
at https://docs.microsoft.com/en-us/dotnet/standard/io/memory-mapped-files and 
I understand there are two main modes: persisted memory-mapped files, which 
are backed by a file on disk, and non-persisted ones, which are backed by the 
system paging file.

The persisted mode sounds just like how Python shared memory works on Linux 
(where the /dev/shm/* files can even outlive the Python process), but I think 
on Windows, Python does not use the persisted mode, so the shared memory goes 
away, in contrast to how it works on Linux.
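
A possible workaround I can imagine (an assumption on my part, not anything 
official): since a non-persisted Windows mapping is released once its last 
handle closes, a helper process could keep the block alive simply by staying 
attached to it. A minimal sketch:

```
import time
from multiprocessing import Process, shared_memory

def keep_alive(name, seconds=3600):
    # Attach to the block by name and just hold the handle open.
    block = shared_memory.SharedMemory(name=name)
    time.sleep(seconds)
    block.close()

if __name__ == '__main__':
    creator = shared_memory.SharedMemory(create=True, name='mymemory', size=1)
    holder = Process(target=keep_alive, args=('mymemory',))
    holder.start()
    # On Windows the block should now survive creator.close(),
    # because 'holder' still has it mapped.
    creator.close()
```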

PS: You can find the code for this shared dict at 
https://github.com/ronny-rentner/UltraDict - please note, it's an early hack 
and not well tested.

--

___
Python tracker <https://bugs.python.org/issue46888>
___



[issue46888] SharedMemory.close() destroys memory

2022-03-02 Thread Ronny Rentner


Ronny Rentner added the comment:

Many thanks for your explanation. It makes sense now.

One thing I've learned is that I need to get rid of the resource tracker for 
shared memory, so I've applied the monkey-patch fix mentioned in 
https://bugs.python.org/issue38119 (sketched below).
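
For reference, a lightly cleaned-up version of the workaround posted on that 
issue. It has to run in every process, before the first SharedMemory is 
created or attached:

```
from multiprocessing import resource_tracker

def remove_shm_from_resource_tracker():
    # Monkey-patch the module-level helpers that shared_memory.py calls,
    # so "shared_memory" resources are never handed to the tracker.
    def fix_register(name, rtype):
        if rtype == "shared_memory":
            return
        return resource_tracker._resource_tracker.register(name, rtype)
    resource_tracker.register = fix_register

    def fix_unregister(name, rtype):
        if rtype == "shared_memory":
            return
        return resource_tracker._resource_tracker.unregister(name, rtype)
    resource_tracker.unregister = fix_unregister

    # Also drop the cleanup function locally (part of the posted workaround).
    if "shared_memory" in resource_tracker._CLEANUP_FUNCS:
        del resource_tracker._CLEANUP_FUNCS["shared_memory"]
```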

The issue is that you need shared memory precisely in a multiprocessing 
environment, but the resource tracker can hardly know about other processes 
using the shared memory unless you tell it explicitly.

It looks like the resource tracker assumes that all users of a shared memory 
block are somehow forked from the same main process, but I'm using spawn and 
not fork because my software should run on Windows as well.

Regarding /dev/shm on Linux: as far as I know, systemd has a 'RemoveIPC' 
option, enabled by default, that deletes shared memory when the user logs 
out. I also remember seeing cron jobs for cleaning up /dev/shm on Debian, 
but I'm not sure whether that is still the current approach.
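
For reference, that cleanup can be disabled in logind's configuration 
(assuming systemd-logind manages the session):

```
# /etc/systemd/logind.conf
[Login]
RemoveIPC=no
```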

--

___
Python tracker <https://bugs.python.org/issue46888>
___