On Feb 15, 2020, at 05:40, Bar Harel <[email protected]> wrote:
> 
> If we had had a global ThreadPoolExecutor, we could have used it 
> exactly for that. The threads would be shared, and the overhead would only 
> occur once. Users of the executor would know that it's a limited resource 
> that may be full at times, and as responsible programmers would not use it 
> for infinite loops that clog the whole system.

You can already do this trivially in any application—just create an executor 
and store it as a global in some module, or attach it to some other global 
object like the config or the run loop, or whatever.
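That "create it yourself" approach can be sketched in a few lines. The module and function names here are invented for illustration, not an existing API:

```python
# shared_pool.py -- one possible app-level "global executor" module.
# The name get_executor() is made up for this sketch.
import threading
from concurrent.futures import ThreadPoolExecutor

_lock = threading.Lock()
_executor = None

def get_executor():
    """Return the process-wide shared ThreadPoolExecutor, creating it lazily."""
    global _executor
    if _executor is None:
        with _lock:
            # Double-checked so concurrent first callers create only one pool.
            if _executor is None:
                _executor = ThreadPoolExecutor(max_workers=8,
                                               thread_name_prefix="shared")
    return _executor
```

Any code in the application can then do `get_executor().submit(fn, arg)` and share the same eight worker threads.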

Presumably the goal here is that having it come with Python would mean lots of 
third-party libraries would start using it. Similar to GCD (Grand Central 
Dispatch): its default dispatch queues would only be a minor convenience that 
you could trivially build yourself, but the real benefit is that they’re widely 
used by third-party Objective-C and Swift libraries because they’ve been there 
from the start and Apple has encouraged their use.

The question is, if we added, say, a 
concurrent.futures.get_shared_thread_pool_executor function today, would people 
change all of the popular libraries to start using it? Probably not, because 
then they’d all have to start requiring Python 3.10, in which case we wouldn’t 
get the benefits.

The solution to that is of course to have a backport on PyPI. But you can 
create that same library today and try to get libraries to start using it, and 
then it could be added to the stdlib once it’s clear there’s a lot of uptake 
and everyone is happy with the API. The only advantage I can see from putting 
it in the stdlib now along with creating that PyPI library is that it might be 
easier to proselytize for it. But I think you need to make the case that it 
really would be easy to proselytize for a stdlib feature with a backport, but 
hard with just a PyPI library. The fact that it wasn’t there from the start 
like GCD’s default queues were means people have already come up with other 
solutions and they might not want a different one.

Plus, you have to propose a specific design and make sure everyone’s happy with 
that, because once it goes into the stdlib, its interface is fixed forever. Do 
people want just a single flat shared executor, or do they want to be able to 
specify different priority/QoS for tasks? (GCD provides five queues to its 
shared thread pool, not just one, so you can make sure your user-initiated 
request doesn’t get blocked by a bunch of bulk background requests.) Do we need 
a shared process pool executor also? Who controls the max thread count? (There 
are Java executors that provide a way for libs to increase it, but not decrease 
it below what the app wanted.) Do asyncio apps really want the same behavior 
from a global shared executor as GUI apps, non-asyncio network apps, games, 
etc.? Do servers and clients want the same behavior? And so on. If you’re only 
building a PyPI library, you can guess at all of this and see what people say, 
but if you add it to the stdlib you have to get it right the first time.
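As a thought experiment, the GCD-style tiered design mentioned above could look something like this in Python. Every name here (the QoS enum, submit, the per-tier worker counts) is invented for this sketch, not a concrete proposal:

```python
# Hypothetical QoS-aware shared pool, loosely modeled on GCD's five queue tiers.
# All names and numbers here are illustrative only.
from concurrent.futures import ThreadPoolExecutor
from enum import Enum

class QoS(Enum):
    USER_INTERACTIVE = 0
    USER_INITIATED = 1
    DEFAULT = 2
    UTILITY = 3
    BACKGROUND = 4

# Give higher-priority tiers more workers so a burst of bulk background
# tasks cannot starve user-initiated requests.
_workers = {QoS.USER_INTERACTIVE: 4, QoS.USER_INITIATED: 4,
            QoS.DEFAULT: 4, QoS.UTILITY: 2, QoS.BACKGROUND: 1}
_pools = {}

def submit(fn, *args, qos=QoS.DEFAULT, **kwargs):
    """Submit fn to the shared executor for the given QoS tier."""
    pool = _pools.get(qos)
    if pool is None:
        pool = _pools.setdefault(
            qos, ThreadPoolExecutor(max_workers=_workers[qos],
                                    thread_name_prefix=qos.name.lower()))
    return pool.submit(fn, *args, **kwargs)
```

Even this small sketch surfaces the open questions: separate pools per tier versus one pool with a priority queue, who picks the worker counts, and whether libraries may raise them.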

_______________________________________________
Python-ideas mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/[email protected]/message/2MFFAIVSYFAXDCIZU4CEIW5OW323I3PZ/
Code of Conduct: http://python.org/psf/codeofconduct/