Steve Stagg <stest...@gmail.com> added the comment:

Unfortunately, it's one of those ugly multithreading issues that's really hard 
to reason about.

In this case, it's not the size of the loop so much as that you've found a way 
to make it very likely that the background thread is doing IO (and holding the 
IO lock) while the interpreter shuts down.

Here's an example that reproduces the abort for me (again, it's multithreading, 
so you may have a different experience) with a smaller range value:

---
import sys, threading

def run():
    # Large writes make it very likely the thread is mid-write (and
    # holding the stderr buffer's lock) when the interpreter shuts down.
    for i in range(100):
        sys.stderr.write(' =.= ' * 10000)

if __name__ == '__main__':
    # The daemon thread is killed abruptly at interpreter shutdown,
    # possibly while it still holds the IO lock.
    threading.Thread(target=run, daemon=True).start()
---

The problem with daemon threads is that they are killed fairly abruptly at 
shutdown, with little opportunity to clean up bad state, so any fix here would 
likely involve revisiting the thread termination code linked in the issue 
above.
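
For anyone hitting this in application code, the usual way to avoid it is to 
not rely on daemon-thread teardown at all: signal the thread and join it 
before the interpreter exits, so it never holds the IO lock at shutdown. A 
minimal sketch (the `stop` event and overall structure are mine, not from the 
issue):

---
import sys, threading

stop = threading.Event()

def run():
    # Keep writing until the main thread asks us to stop.
    while not stop.is_set():
        sys.stderr.write(' =.= ' * 10000)

if __name__ == '__main__':
    t = threading.Thread(target=run)  # non-daemon: we join it explicitly
    t.start()
    # ... the program's real work goes here ...
    stop.set()  # ask the writer thread to finish its current write
    t.join()    # the IO lock is released before shutdown begins
---

That's a workaround for user code, though, not a fix for the interpreter-side 
abort.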

There may be a fix possible, but it would be a complex thread-state-management 
fix, not just a limit on loop counts, unfortunately.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue42717>
_______________________________________