On Sat, Feb 17, 2018 at 12:54 AM, Chris Angelico <ros...@gmail.com> wrote:
> On Sat, Feb 17, 2018 at 5:25 PM, boB Stepp <robertvst...@gmail.com> wrote:
>> I am curious as to what efforts have been attempted to remove the GIL
>> and what tradeoffs resulted and why?  Is there a single article
>> somewhere that collates this information?  I fear if I try to Google
>> this I will get a lot of scattered pieces that I will have to wade
>> through to get to what I want to know, where you (or someone else)
>> might be able to point me to a good link.  Or kindly summarize
>> the relevant information yourself.
>>
>> Thanks!
>
> No, there isn't a single article, at least not that I know of. A good
> word to search for is "gilectomy", which brought me to this talk by
> Larry Hastings:
>
> https://www.youtube.com/watch?v=pLqv11ScGsQ

Thanks for the link.  I'll give it a look tomorrow.  Must get some
sleep soon!  Meanwhile I just finished reading "Efficiently Exploiting
Multiple Cores with Python" (from Nick Coghlan's Python Notes) at

http://python-notes.curiousefficiency.org/en/latest/python3/multicore_python.html

It answered some of my questions and has a lot of good information.
It appears that a "gilectomy", as you put it, is a really tough
problem.  If I am even close to understanding the issues involved,
implementing it might require a major backwards-compatibility-breaking
change to CPython.

> Broadly speaking, what happens is that removing a large-scale lock
> (the GIL) requires using a whole lot of small-scale locks. That gives
> finer granularity, but it also adds a whole lot of overhead; the CPU
> features required for implementing those locks are not fast. With the
> GIL, you claim it, and you can do what you like. Without the GIL, you
> have to claim a lock on each object you manipulate, or something along
> those lines. (Different attempts have gone for different forms of
> granularity, so I can't generalize too much here.) That means claiming
> and relinquishing a lot more locks, which in turn means a lot more
> CPU-level "lock" primitives. That's a lot of overhead.

Thanks for this explanation.
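The cost Chris describes can be felt even from pure Python. Here is a
rough sketch (my own toy example, not from any real GIL-removal
attempt): doing a batch of work under one coarse "GIL-like" lock versus
taking a per-object lock on every single access. The per-object version
pays the acquire/release overhead tens of thousands of times:

```python
# Toy comparison: one coarse lock vs. a lock per object.
import threading
import time

N = 100_000
counters = [0] * 100

def coarse(lock):
    # GIL-style: claim one big lock once, then do all the work.
    with lock:
        for i in range(N):
            counters[i % 100] += 1

def fine(locks):
    # Fine-grained: claim and release a lock on every single access.
    for i in range(N):
        with locks[i % 100]:
            counters[i % 100] += 1

big_lock = threading.Lock()
t0 = time.perf_counter()
coarse(big_lock)
t_coarse = time.perf_counter() - t0

per_object = [threading.Lock() for _ in range(100)]
t0 = time.perf_counter()
fine(per_object)
t_fine = time.perf_counter() - t0

print(f"coarse: {t_coarse:.4f}s  fine-grained: {t_fine:.4f}s")
```

The loops do identical work; the fine-grained version just adds 100,000
lock operations, and it shows. Real gilectomy attempts face this on
every refcount and object access, in C rather than Python, but the
shape of the tradeoff is the same.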
It really helps me a lot!

> One of the best ways to multi-thread CPU-intensive work is to push a
> lot of the work into an extension library. Take a function like this:
>
> def frobnosticate(stuff):
>     magic magic magic
>     magic magic more magic
>     return result
>
> As long as the returned object is a newly-created one and the
> parameter is not mutated in any way, you can do all the work in the
> middle without holding the GIL. That can't be done in pure Python, but
> in a C function, it certainly can. You still have the coarse-grained
> locking of the GIL, you still have all the protection, but you can now
> have two threads frobnosticating different stuff at the same time.

Yeah, Coghlan's article mentioned this.  I guess using Cython would be
one approach to this.

Thanks, Chris!

-- 
boB
-- 
https://mail.python.org/mailman/listinfo/python-list
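P.S. You can see Chris's point from the stdlib without writing any C
yourself: CPython's hashlib is documented to release the GIL while
hashing buffers larger than about 2 KiB, so plain threads really do run
the C hashing code in parallel. A small sketch (sha256 standing in for
the "magic" in frobnosticate):

```python
# Threads can run C-level work in parallel when the C code releases
# the GIL; hashlib does this for buffers larger than ~2047 bytes.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def frobnosticate(stuff: bytes) -> str:
    # New object in, new object out, no shared state mutated:
    # the C hashing loop runs without holding the GIL.
    return hashlib.sha256(stuff).hexdigest()

chunks = [bytes([i]) * 10_000_000 for i in range(4)]  # 4 x 10 MB inputs

with ThreadPoolExecutor(max_workers=4) as pool:
    digests = list(pool.map(frobnosticate, chunks))

print(digests[0][:16])
```

The same idea is what Cython's "with nogil:" blocks give you for your
own code.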