Nick Craig-Wood wrote:

[GIL]
> That is certainly true.  However the point is that running
> on 2 CPUs at once at 95% efficiency is much better than running on
> only 1 at 99%...
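(For what it's worth, the arithmetic behind the quoted figures, reading
"efficiency" as the fraction of each CPU spent on useful work, which the
post never actually defines, would be roughly this:)

    # My reading of the quoted figures, not a definition from the post:
    # useful work per unit time, relative to one fully busy CPU.
    one_cpu  = 1 * 0.99    # 0.99
    two_cpus = 2 * 0.95    # 1.90
    print(two_cpus / one_cpu)   # ~1.92, nearly a 2x gain, if that reading holds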
How do you define this percent efficiency?

>>> The truth is that the future (and present reality) of almost
>>> every form of computing is multi-core,
>>
>> Is it? 8)
>
> Intel, AMD and Sun would have you believe that yes!

Strange; in my programs I don't need any "real" concurrency (they are
network servers and scripts). Or do you mean "the future of computing
hardware is multi-core"? That may indeed be true.

>> The question is: If it really was, how much useful
>> performance gain would you get?
>
> The linux kernel has been through these growing pains already...
> SMP support was initially done with the Big Kernel Lock (BKL)
> which is exactly equivalent to the GIL.

So, how much performance gain would you get? Again, managing
fine-grained locking can be much more work than one simple lock.

> The linux kernel has moved onwards to finer and finer grained
> locking.

How do you compare a bytecode interpreter to a monolithic OS kernel?

> I'd like to see a python build as it is at the moment and a
> python-mt build which has the GIL broken down into a lock on each
> object.  python-mt would certainly be slower for non threaded
> tasks, but it would certainly be quicker for threaded tasks on
> multiple CPU computers.

Where does this certainty come from? For example, if the program in
question involves mostly IO access, there will be virtually no gain.
Multithreading is not performance.

> The user could then choose which python to run.
>
> This would of course make C extensions more complicated...

Also, C extensions can release the GIL for long-running computations.

Regards,


Björn

--
BOFH excuse #391:

We already sent around a notice about that.
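(A minimal sketch of the IO point above, assuming "mostly IO access" means
blocking calls such as socket reads or sleeps, around which CPython already
releases the GIL; the thread count and sleep duration are made-up
illustration values:)

    # IO-bound threads already overlap under the GIL, because CPython
    # drops the lock around blocking calls like time.sleep() and socket IO.
    import threading
    import time

    def io_bound_task():
        time.sleep(1)   # stands in for a blocking network or disk call

    start = time.time()
    threads = [threading.Thread(target=io_bound_task) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Prints roughly 1 second, not 4: the waits overlap even though only
    # one thread holds the GIL at a time, so a fine-grained-locking
    # python-mt build would buy little for a workload like this.
    print("elapsed: %.1fs" % (time.time() - start))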