<snip>
> Yes. Parallelism certainly deserves attention, and I believe
> "amateurs" are likely to help in the breakthroughs to come. I
> further suspect, though, that they'll be amateurs who benefit
> from knowledge of existing research into the range of documented
> concurrency concepts, including CSPs, tasks, guarded methods,
> microthreads, weightless threads, chords, co-routines, and so on.
Yes, there are lots of different concepts, even in Python: there's pympi (as was mentioned), the standard Python thread library, the subprocess module, generators, microthreads and Stackless, not to mention Candygram, PyLinda, ATOM, Kamaelia (I'll get to that in a minute), and other things you can search for on the web.

My motivation here is just to see if I can find some lowest common denominator, to try to simplify this stuff to the point where the whole concept is a little easier to use, and the plumbing can be hidden away somewhere so "amateurs" don't have to worry about it (too much) if they don't want to.

To be more specific: there does seem to be a lot of work using generators to set up concurrency, and that's fine, but it seems to require a fair amount of scaffolding and a different way of looking at things, and it's not obvious to me how it can scale up on multiprocessor systems with the GIL still in place.

I'm not sure this is the answer to all the problems, but breaking up the global address space and making it easy to split jobs into small communicating chunks seems like a good way to go. Or maybe I'm missing something? Is there anything you'd care to elaborate on?

--
http://mail.python.org/mailman/listinfo/python-list