Quoth Dave Brueck <[EMAIL PROTECTED]>:
...
| Another related benefit is that a lot of application state is implicitly and
| automatically managed by your local variables when the task is running in a
| separate thread, whereas other approaches often end up forcing you to think in
| terms of a state machine when you don't really care* and as a by-product you
| have to [semi-]manually track the state and state transitions - for some
| problems this is fine, for others it's downright tedious.
As you may know, it used to be, in Stackless Python, that you could have
both.  Your function would suspend itself, and the select loop would resume
it, for something like serialized threads.  (The newer version of Stackless
lost this continuation feature, but for all I know there may be new features
that regain some of that ground.)

I put that together with real OS threads once, where the I/O loop was a
message queue instead of select.  A message-queueing multi-threaded
architecture can end up being just as much of a state transition game.

I like threads when they're used in this way, as application components that
manage some device-like thing, like a socket or a graphical user interface
window, interacting through messages.  Even then, though, there tend to be a
lot of undefined behaviors around events like termination of the main thread,
receipt of signals, etc.  Rough sketches of both arrangements follow below.

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
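
For what it's worth, something of that shape can be faked today with ordinary
generators: the task suspends by yielding the socket it's waiting on, and the
select loop resumes it.  A rough sketch only -- echo_task and run are names I
made up, and generators here are a stand-in for the old Stackless
continuations, not anything Stackless actually provided:

import select
import socket

def echo_task(conn):
    # The task's state between reads is just these local variables;
    # there is no hand-written state machine.
    while True:
        yield conn                  # suspend: resume me when conn is readable
        data = conn.recv(4096)
        if not data:
            conn.close()
            return
        conn.sendall(data)

def run(listener):
    tasks = {}                      # socket -> suspended task waiting on it
    while True:
        readable, _, _ = select.select([listener] + list(tasks), [], [])
        for sock in readable:
            if sock is listener:
                conn, _ = listener.accept()
                task = echo_task(conn)
                tasks[next(task)] = task     # run it up to its first yield
            else:
                task = tasks.pop(sock)
                try:
                    tasks[next(task)] = task # resume it, then park it again
                except StopIteration:
                    pass                     # the task ran to completion

if __name__ == '__main__':
    listener = socket.socket()
    listener.bind(('localhost', 8000))
    listener.listen()
    run(listener)

The point is the one Dave made above: between reads, echo_task's state is
just its local variables, not an explicitly tracked state.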
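
And the message-queue version, boiled down about as far as it will go.
Again, socket_component, main_loop and the one-tuple-per-message format are
all invented for the example; the only machinery is threading and a Queue:

import queue
import socket
import threading

def socket_component(name, conn, inbox):
    # One thread per device-like thing: it owns the socket and talks to
    # the rest of the program only by posting messages.
    while True:
        data = conn.recv(4096)
        inbox.put((name, data))     # hand the bytes to the main loop
        if not data:
            return                  # peer closed; b'' was posted above

def main_loop(inbox):
    # The main thread blocks on the message queue where the generator
    # version blocked on select().
    while True:
        name, data = inbox.get()
        if data:
            print(name, 'received', len(data), 'bytes')
        else:
            print(name, 'closed')

if __name__ == '__main__':
    inbox = queue.Queue()
    listener = socket.socket()
    listener.bind(('localhost', 8001))
    listener.listen()

    def acceptor():
        n = 0
        while True:
            conn, _ = listener.accept()
            n += 1
            threading.Thread(target=socket_component,
                             args=('conn-%d' % n, conn, inbox),
                             daemon=True).start()

    threading.Thread(target=acceptor, daemon=True).start()
    main_loop(inbox)

Once the messages get more varied than this, the dispatching in main_loop is
where the state transition game comes back.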