New submission from Yusef Shaban <yus2047...@gmail.com>:

This came up via a supporting library, but the actual issue is within 
concurrent.futures.ProcessPoolExecutor.

Discussion can be found at https://github.com/agronholm/apscheduler/issues/414

ProcessPoolExecutor does not spin down worker processes and spin up new ones 
between jobs. Instead, it simply reuses existing processes for new jobs. Is 
there no option to have it shut a worker down after a job completes and spawn 
a fresh one in its place? That behavior is much friendlier to garbage 
collection and helps prevent memory leaks, since anything a job leaked is 
reclaimed when its process exits.
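
For comparison, multiprocessing.Pool already exposes a maxtasksperchild 
argument that gives roughly the behavior I am asking for: each worker exits 
after handling a fixed number of tasks and is replaced by a fresh process. A 
minimal sketch (the job function is illustrative only):

    import multiprocessing

    def job(x):
        # Pretend this leaks memory inside the worker process.
        return x * x

    if __name__ == "__main__":
        # maxtasksperchild=1 replaces each worker after a single task,
        # so any memory a job leaked is reclaimed when its process exits.
        with multiprocessing.Pool(processes=3, maxtasksperchild=1) as pool:
            print(pool.map(job, range(6)))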

ProcessPoolExecutor also spins up more processes than it needs, treating 
max_workers as an exact count rather than an upper bound. For example, I set 
max_workers=10 but only ever run 3 jobs at a time. Given the documentation, 
one would expect at most 4 processes: the main process plus the 3 workers. 
Instead, ProcessPoolExecutor spawns all 10 workers up front and leaves the 
other 7 sitting idle, even though they are never needed.
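
A rough reproduction sketch (the job function and sleep times are arbitrary, 
just enough to keep the workers alive while counting them):

    import time
    import multiprocessing
    from concurrent.futures import ProcessPoolExecutor

    def job(x):
        time.sleep(2)
        return x

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=10) as executor:
            futures = [executor.submit(job, i) for i in range(3)]
            time.sleep(1)  # let the executor finish spawning workers
            # Expected: 3 worker processes. Observed on 3.8: 10.
            print(len(multiprocessing.active_children()))
            for f in futures:
                print(f.result())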

----------
components: Library (Lib)
messages: 359260
nosy: yus2047889
priority: normal
severity: normal
status: open
title: concurrent.futures.ProcessPoolExecutor does not properly reap jobs and 
spawns too many workers
type: behavior
versions: Python 3.8

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue39207>
_______________________________________