Josh Rosenberg added the comment:

The docs (
https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.as_completed
) do seem to indicate it shouldn't raise as long as results were available
before the timeout expired:

"The returned iterator raises a concurrent.futures.TimeoutError if __next__() 
is called and the result isn’t available after timeout seconds from the 
original call to as_completed()."

My reading of that would be that it raises the error only when:

1. The timeout has expired
2. The call would block (or possibly, would have blocked after the timeout 
expired), indicating no result was available

Handling "would have blocked" is hard, but might it make sense to still allow a 
non-blocking wait on the event even if the timeout has expired, with the 
exception raised only if the non-blocking wait fails?
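
Something along these lines, as a rough sketch of the idea only (this is not
the actual as_completed internals; the waiter/event machinery is replaced by a
simple polling loop, and the function name is made up):

    import time
    from concurrent.futures import TimeoutError

    def as_completed_sketch(fs, timeout=None):
        # Sketch of the proposed behaviour: even after the deadline has
        # passed, do one non-blocking check and only raise if nothing is
        # actually ready.
        fs = set(fs)
        pending = set(fs)
        if timeout is not None:
            end_time = time.monotonic() + timeout
        while pending:
            done = {f for f in pending if f.done()}  # non-blocking check
            if done:
                pending -= done
                yield from done
                continue
            if timeout is not None and time.monotonic() >= end_time:
                # Deadline passed *and* nothing is ready right now.
                raise TimeoutError(
                    '%d (of %d) futures unfinished' % (len(pending), len(fs)))
            # Stand-in for blocking on the waiter event in the real code.
            time.sleep(0.01)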

Side note: it looks like this code is still using time.time rather than
time.monotonic, so it's vulnerable to system clock adjustments; an NTP update
could cause a five-second timeout to expire instantly, or to take seconds or
even minutes longer than intended.
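
For illustration only (the variable names here are hypothetical, not taken
from the module):

    import time

    timeout = 5.0

    # Wall clock: shifts if NTP or an admin adjusts the system time.
    end_time = time.time() + timeout

    # Monotonic clock: unaffected by system clock adjustments.
    end_time = time.monotonic() + timeout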

----------
nosy: +josh.r

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29733>
_______________________________________