On Thu, Aug 11, 2016 at 3:06 PM, Paul Rubin <no.email@nospam.invalid> wrote:
> The basic reason to do it is it lets you serve a lot of concurrent i/o
> channels (network connections, say) without using threads. If you want
> to read a packet, you launch a non-blocking read that returns
> immediately, and then transfer control (often through some
> behind-the-scenes magic) to an event loop that dispatches back to you
> after your read request completes, through either a callback or a
> coroutine jump depending on which framework you're using. In Python
> that gets you much higher performance than threads, plus you avoid the
> usual bunch of hazards that parents used to scare their kids with about
> multi-threaded programming.
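(For concreteness, the coroutine-jump style Paul describes might look like the sketch below. The quoted post names no framework; asyncio, and the use of queues to stand in for network connections, are my assumptions here.)

```python
import asyncio

async def read_packet(name, queue):
    # Awaiting here suspends this coroutine and hands control back to
    # the event loop -- the "coroutine jump" variant of the dispatch
    # described above. No OS thread is blocked while we wait.
    data = await queue.get()
    return f"{name} received {data!r}"

async def main():
    # Two independent "connections", modelled as queues for brevity.
    q1, q2 = asyncio.Queue(), asyncio.Queue()
    t1 = asyncio.create_task(read_packet("conn-1", q1))
    t2 = asyncio.create_task(read_packet("conn-2", q2))
    # "Packets" arrive; the loop resumes whichever coroutine was waiting.
    q1.put_nowait(b"hello")
    q2.put_nowait(b"world")
    return await asyncio.gather(t1, t2)

results = asyncio.run(main())
print(results)
```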
Hmm. I'm not sure about that last bit. In order to properly support asynchronous I/O and the concurrency and reentrancy it implies, you basically need all the same disciplines that you would for threaded programming - no shared mutable state, keep your locals local, etc. The only difference is that the context switches are coarser than they are in threading (and remember, CPython already refuses to switch threads in the middle of a single bytecode operation, so most of the worst horrors can't happen even there).

But maybe I'm just too comfortable with threads. It's entirely possible; my first taste of reentrancy was interrupt handling in real-mode 80x86 assembly language, and in comparison to that, threads are pretty benign!

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list
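(A minimal sketch of the hazard ChrisA is pointing at, using asyncio purely for illustration: a read-modify-write on shared state that spans an await point interleaves exactly as a threaded one can - the switch points are merely coarser and explicit.)

```python
import asyncio

counter = 0

async def unsafe_increment():
    # Classic read-modify-write split across a suspension point.
    global counter
    value = counter          # read
    await asyncio.sleep(0)   # context switch: other coroutines run here
    counter = value + 1      # write back a now-stale value

async def main():
    # Ten concurrent increments of the shared counter.
    await asyncio.gather(*(unsafe_increment() for _ in range(10)))

asyncio.run(main())
# All ten coroutines read 0 before any of them wrote, so the
# counter ends at 1 rather than 10 - a lost-update race, no
# threads required.
print(counter)
```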