Python's approach with the GIL is both reasonable and disappointing.
Reasonable because I understand how it makes things easier for the
interpreter's internals. Disappointing because it means that standard
Python cannot take advantage of the parallelism that today's computers
increasingly offer. For example, I found out only recently, almost by
chance, that my wife's laptop has not one but two processors, even
though it isn't a particularly high-end machine. I now understand
that OS-level threads do run on both, but that the GIL effectively
prevents Python code from executing on them in parallel. (Am I
understanding this correctly?)
Not entirely. Yes if your application is CPU-bound, no if it is
IO-bound. And a lot of people think that threads are the wrong
approach to concurrency anyway, which is why Python 2.6 ships the
multiprocessing module: it lets you use the full capacity of your CPUs.
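Roughly like this (the worker function and the numbers are just made
up for illustration; the point is that each job runs in its own
process, so one interpreter's GIL doesn't serialize the others):

from multiprocessing import Pool

def cpu_bound(n):
    # Deliberately CPU-heavy work so that extra cores actually help.
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    pool = Pool()          # one worker process per CPU core by default
    results = pool.map(cpu_bound, [2000000] * 4)
    pool.close()
    pool.join()
    print(results)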
I do not completely understand your statement in the context of my
original example though, the shared dictionary. Since the GIL is
released every X bytecode operations, surely it can happen that while
the dictionary is being iterated over, e.g. in a for/in loop, a
different thread changes it, with potentially catastrophic
consequences. The GIL wouldn't be able to prevent this, would it?
You didn't give a concrete usage scenario for your shared dict - but I
assumed that by reading and writing you meant
mydict[key] = value
value = mydict[key]
which are both atomic thanks to the GIL.
More complex operations - such as iteration - might need
coarser-grained locking.
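A rough sketch of what I mean (the dict, lock and helper names are
made up for illustration):

import threading

mydict = {}
lock = threading.Lock()

def store(key, value):
    # A single-key write is atomic anyway, but taking the lock here
    # keeps it consistent with the iterating reader below.
    with lock:
        mydict[key] = value

def snapshot():
    # Hold the lock for the whole iteration so no other thread can
    # add or remove keys while we walk the dictionary.
    with lock:
        return list(mydict.items())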
Diez