On Jan 3, 2020, at 10:11, Miguel Ángel Prosper <[email protected]>
wrote:
>
>
>>
>> Having a way to clear the queue and then shut down once existing jobs are
>> done is a lot more manageable.
> ...
>> So the only clean way to do this is cooperative: flush the queue, send some
>> kind of
>> message to all children telling them to finish as quickly as possible, then
>> wait for them
>> to finish.
>
> I was personally thinking of an implementation like that: cancel everything
> still pending and, if wait is true, wait for the ones already running, for
> both implementations.
OK, that makes sense. And it seems like it should be implementable; the only
hard part is identifying all the edge cases and verifying they all do the right
thing for both threads and processes.
But I don’t think “terminate” is the right name. Maybe “cancel”? Or even
“shutdown(wait=whatever, cancel=True)”?
I think Java inspired this library, so maybe it’s worth looking at what they
do. But IIRC, it’s a much more complicated API in general, and for this case
you’d actually have to do something like this:
x.shutdown() # stop new submissions
x.cancelAll() # cancel all tasks still in the queue
x.purge() # remove and handle all canceled tasks
x.join() # wait for already-started tasks to finish
… which probably isn’t what we want.
> I didn't actually mean "terminate" literally; I just called it that because
> that's what multiprocessing.dummy.Pool.terminate (+ join after) does.
IIRC, it only does that by accident: dummy.Process.terminate is a no-op, which
isn’t documented but just happens to be what CPython does.
_______________________________________________
Python-ideas mailing list -- [email protected]
Message archived at
https://mail.python.org/archives/list/[email protected]/message/LZEKK6EWMPMBNGJSVZ4YGGAW5Y6OTV6G/