On 9/4/2010 11:51 PM, Paul Rubin wrote:
> John Nagle <na...@animats.com> writes:
>>     Unoptimized reference counting, which is what CPython does, isn't
>> all that great either.  The four big bottlenecks in Python are boxed
>> numbers, attribute lookups, reference count updates, and the GIL.

> The performance hit of having to lock the refcounts before update has
> been the historical reason for keeping the GIL.  The LOCK prefix takes
> something like 100 cycles on an x86.  Is optimizing the refcount updates
> going to come anywhere near making up for that?
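
    Just to make it concrete how often those counts get touched: every
name binding and every call bumps a refcount, which is why paying for a
LOCK-prefixed increment on each update would hurt.  A quick illustration
with sys.getrefcount (exact numbers depend on the CPython version, and
getrefcount itself adds one temporary reference):

    import sys

    x = []
    print(sys.getrefcount(x))   # typically 2: the name 'x' plus the call's temporary reference

    y = x                       # binding another name adds a reference
    print(sys.getrefcount(x))   # typically 3

    def f(obj):
        # the parameter 'obj' holds one more reference for the call's duration
        return sys.getrefcount(obj)

    print(f(x))                 # typically 4 while inside the call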

    I've argued for an approach in which only synchronized or immutable
objects can be shared between threads.  Then, only synchronized objects
have refcounts.  See
"http://www.animats.com/papers/languages/pythonconcurrency.html";

    Guido doesn't like it.  He doesn't like any "restrictions".
So we're stuck dragging around the boat anchor.
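
    For what it's worth, a rough sketch of what a "synchronized object"
could look like under that model (the class and names are illustrative,
not taken from the paper):

    import threading

    class SynchronizedCounter(object):
        # Illustrative "synchronized object": every public method runs
        # under the instance's own lock.  Under the proposed model, only
        # objects like this, or immutable ones, would be shareable
        # between threads.
        def __init__(self):
            self._lock = threading.Lock()
            self._value = 0

        def increment(self):
            with self._lock:
                self._value += 1

        def value(self):
            with self._lock:
                return self._value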

    I'd hoped that the Unladen Swallow people might come up with some
really clever solution, but they seem to be stuck.  It's been almost
a year since the last quarterly release.  Maybe Google is putting their
effort into Go.

    What's so striking is that Shed Skin can deliver 20x to 60x
speedups over CPython, while PyPy and Unladen Swallow have
trouble getting 1.5x.  The question is how much one has to
restrict the language to get a serious performance improvement.
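
    For a sense of what "restrict" means there: Shed Skin compiles an
implicitly statically typed subset of Python, so every name has to keep
a single inferable type.  Code in roughly this flavour (my example, not
taken from the Shed Skin docs) is what such compilers do well on:

    def dot(a, b):
        # 'a' and 'b' stay lists of floats and 'total' stays a float,
        # so the whole program's types can be inferred ahead of time
        total = 0.0
        for i in range(len(a)):
            total += a[i] * b[i]
        return total

    xs = [float(i) for i in range(1000)]
    ys = [0.5 * i for i in range(1000)]
    print(dot(xs, ys))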

Python's "with" statement as an approach to RAII has seemed ok to me.  I
can't think of a time when I've really had to use a finalizer for
something with dynamic extent.  They've always seemed like a code smell
to me.

   The problem appears when you have an object that owns something, like
a window or a database connection.  "With" is single-level.
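
    One workaround is to make the owning object a context manager itself
and have it close what it owns; a rough sketch (the Session class here is
just illustrative):

    import sqlite3

    class Session(object):
        # Owns a database connection; exiting the Session closes it, so
        # cleanup follows the ownership chain even though "with" itself
        # only handles one level.
        def __init__(self, path=":memory:"):
            self.conn = sqlite3.connect(path)

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            self.conn.close()
            return False    # don't suppress exceptions

    with Session() as s:
        s.conn.execute("CREATE TABLE t (x INTEGER)")
    # the connection owned by the Session is closed here, even on error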

                                John Nagle

