Kyle Stanley <aeros...@gmail.com> added the comment:

So, I think a potential approach to this issue with ProcessPoolExecutor would 
be making it a bit more similar to ThreadPoolExecutor: instead of spawning 
*max_workers* processes on startup (see `_adjust_process_count()`), spawn each 
new process in submit() (see `_adjust_thread_count()`), and only do so when 
there are no idle processes (and the current count is below max_workers, of 
course).
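To make the idea concrete, here is a minimal sketch of that on-demand spawning pattern, modeled loosely on ThreadPoolExecutor._adjust_thread_count(). This is illustrative only, not the actual ProcessPoolExecutor internals: the LazyPool class and its method names are hypothetical, and threads stand in for worker processes so the example stays self-contained.

```python
import queue
import threading

class LazyPool:
    """Illustrative pool that spawns a worker only when submit() finds
    no idle worker and the count is still below max_workers."""

    def __init__(self, max_workers):
        self._max_workers = max_workers
        self._work_queue = queue.Queue()
        self._workers = []
        # Counts workers currently idle and waiting for work, like
        # ThreadPoolExecutor's _idle_semaphore.
        self._idle_semaphore = threading.Semaphore(0)

    def submit(self, fn, *args):
        result = queue.Queue(maxsize=1)
        self._work_queue.put((fn, args, result))
        self._adjust_worker_count()
        return result

    def _adjust_worker_count(self):
        # The core of the proposed change: only spawn a new worker if
        # none are idle and we are still below max_workers.
        if self._idle_semaphore.acquire(timeout=0):
            return  # an existing idle worker will pick up the item
        if len(self._workers) < self._max_workers:
            t = threading.Thread(target=self._worker_loop, daemon=True)
            self._workers.append(t)
            t.start()

    def _worker_loop(self):
        while True:
            fn, args, result = self._work_queue.get()
            out = fn(*args)
            # Mark this worker idle before publishing the result, so a
            # follow-up submit() re-uses it instead of spawning anew.
            self._idle_semaphore.release()
            result.put(out)
```

With this scheme, two sequential submissions re-use the single spawned worker rather than starting max_workers up front:

```python
pool = LazyPool(4)
r1 = pool.submit(pow, 2, 10)
print(r1.get(timeout=5))        # 1024
r2 = pool.submit(pow, 3, 3)
print(r2.get(timeout=5))        # 27
print(len(pool._workers))       # 1 -- the idle worker was re-used
```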

This wouldn't reap idle processes, but I think re-using them is a behavior 
we want to retain: it would be quite costly to close and start a new 
process each time a work item completes.

From my perspective, the main issue is that the processes are being spawned 
all at once instead of being spawned as needed. This can result in a 
substantial amount of extra cumulative idle time throughout the lifespan of 
the ProcessPoolExecutor.

I should have some time to work on this in the next month or sooner, so I'll 
assign myself to this issue.

(I also changed the version to just Python 3.9, as this seems like too 
significant a behavioral change to backport to 3.8. Let me know if anyone 
disagrees.)

----------
assignee:  -> aeros
stage:  -> needs patch
versions: +Python 3.9 -Python 3.8

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue39207>
_______________________________________