On Oct 27, 4:05 am, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> Andy O'Meara wrote:
> > Well, when you're talking about large, intricate data structures
> > (which include opaque OS object refs that use process-associated
> > allocators), even a shared memory region between the child process
> > and the parent can't do the job.  Otherwise, please describe in
> > detail how I'd get an opaque OS object (e.g. an OS ref that refers
> > to memory-resident video) from the child process back to the parent
> > process.
>
> WHAT PARENT PROCESS? "In the same address space", to me, means
> "a single process only, not multiple processes, and no parent process
> anywhere". If you have just multiple threads, the notion of passing
> data from a "child process" back to the "parent process" is
> meaningless.

I know...  I was just responding because you and others here keep
beating the "fork" drum.  I was just trying to make it clear that a
shared address space is the only way to go.  OK, good, so we're in
agreement that threads are the only way to deal with the "intricate
and complex" data set issue in a performance-centric application.

> > Again, the big picture that I'm trying to plant here is that there
> > really is a serious need for truly independent interpreters/contexts
> > in a shared address space.
>
> I understand that this is your mission in this thread. However, why
> is that your problem? Why can't you just use the existing (limited)
> multiple-interpreters machinery, and solve your problems with that?

Because then we're back to the GIL preventing threads that run
CPU-bound scripts from making efficient use of multiple cores (when
they otherwise could).  Just so we're on the same page, "when they
otherwise could" is the important given here: each interpreter
("context") truly never shares any state with the others.

An example would be Python scripts that generate video programmatically
from an initial set of params, using an in-house C module to construct
each frame (which in turn makes and modifies Python objects that wrap
intricate codec-related data structures).  Suppose you wanted to render
3 of these at the same time, one on each thread (3 threads).  With the
GIL in place, these threads can't get anywhere close to their
potential.  Your response thus far is that the C module should release
the GIL before it commences its heavy lifting.  Well, the problem is
that during its heavy lifting it needs to call back into its
interpreter.  It turns out this isn't an exotic case at all: there's a
*ton* of utility gained by making calls back into the interpreter.  The
best example is that since code is more easily maintained in Python
than in C, a lot of the module's "utility" code is likely to be in
Python.  Unsurprisingly, this is the situation myself and many others
are in: we want to keep using the interpreter from within the C module
(so, as I understand it, the proposal to have the C module release the
GIL unfortunately doesn't work as a general solution).
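To make that concrete, the render path looks roughly like this -- just
a sketch, not our actual code; the module name "renderdemo",
encode_frame_chunk(), and the utility.next_frame_params() callback are
made-up stand-ins:

    #include <Python.h>

    /* Stand-in for the codec-level heavy lifting (pure C, no Python
       objects touched). */
    static void
    encode_frame_chunk(int frame)
    {
        (void)frame;  /* ... real codec work would go here ... */
    }

    /* Sketch of the module entry point a render script would call.
       The script passes in a Python "utility" object whose methods
       hold the logic that's easier to keep in Python than in C. */
    static PyObject *
    render_frames(PyObject *self, PyObject *args)
    {
        PyObject *utility, *result;
        int frame, frame_count;

        if (!PyArg_ParseTuple(args, "Oi", &utility, &frame_count))
            return NULL;

        for (frame = 0; frame < frame_count; frame++) {
            /* Release the GIL around the pure-C work so the other
               render threads could, in principle, use other cores... */
            Py_BEGIN_ALLOW_THREADS
            encode_frame_chunk(frame);
            Py_END_ALLOW_THREADS
            /* ...but Py_END_ALLOW_THREADS has to re-take the GIL
               because this per-frame callback runs Python code, so
               all the render threads serialize right here. */
            result = PyObject_CallMethod(utility, "next_frame_params",
                                         "(i)", frame);
            if (result == NULL)
                return NULL;
            Py_DECREF(result);
        }
        Py_RETURN_NONE;
    }

    static PyMethodDef renderdemo_methods[] = {
        {"render_frames", render_frames, METH_VARARGS,
         "Encode frames, calling back into Python per frame."},
        {NULL, NULL, 0, NULL}
    };

    /* Python 2.x-style module init ("renderdemo" is a made-up name). */
    PyMODINIT_FUNC
    initrenderdemo(void)
    {
        Py_InitModule("renderdemo", renderdemo_methods);
    }

Run three of those on three threads and the pure-C encode portions
overlap fine, but every per-frame trip back into the interpreter
re-takes the GIL, so the threads end up going single-file exactly where
the Python-side utility work happens -- and the more of the module's
logic that lives in Python, the more time is spent there.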
> > For most
> > industry-caliber packages, the expectation and convention (unless
> > documented otherwise) is that the app can make as many contexts as
> > it wants in whatever threads it wants because the convention is that
> > the app must (a) never use one context's objects in another context,
> > and (b) never use a context at the same time from more than one
> > thread. That's all I'm really trying to look at here.
>
> And that's indeed the case for Python, too.  The app can make as many
> subinterpreters as it wants to, and it must not pass objects from one
> subinterpreter to another one, nor should it use a single interpreter
> from more than one thread (although that is actually supported by
> Python - but it surely won't hurt if you restrict yourself to a single
> thread per interpreter).

I'm not following you there...  I thought we were all in agreement that
the existing C modules are FAR from being reentrant, regularly making
use of static/global objects.  The point I made before is that other
industry-caliber packages specifically don't have restrictions of *any*
kind.

I appreciate your argument that a "PyC" concept is a lot of work and
needs careful design, but let's not kill the discussion just because of
that.  The fact remains that the video encoding scenario described
above is a pretty reasonable situation, and as more people comment in
this thread, there's an increasing need to offer apps more flexibility
when it comes to multi-threaded use.


Andy
--
http://mail.python.org/mailman/listinfo/python-list