On Mon, Mar 28, 2011 at 4:55 AM, Dave Angel <da...@ieee.org> wrote:
> I'd expect it to be very slow.  I presume it not only has to visit and
> duplicate every bit of the data structure, but also has to detect loops, and
> avoid infinite loops recreating them.
>
> This loop detection is probably an n-squared algorithm, so it grows much
> faster than the number of items being deep-copied.

I'm not certain how it's implemented, but I would presume that it just
builds a dict cache of the copied objects, keyed by the original
objects' ids, to prevent recursing into objects that have already been
copied.  Adding an entry and checking for existence are both O(1)
average-case, so the whole thing should be O(n) overall.
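
For illustration, here's a rough sketch of that memo-dict approach.  It's
not the actual copy module source -- the deepcopy_sketch name and the
lists/dicts-only handling are my own simplification -- but it shows how
the id-keyed cache breaks cycles without any quadratic loop detection:

import copy

def deepcopy_sketch(obj, memo=None):
    """Illustrative memo-based deep copy (lists/dicts only).

    memo maps id(original) -> copy, so each object is copied once and
    any later visit (including via a cycle) is an O(1) cache hit.
    """
    if memo is None:
        memo = {}
    obj_id = id(obj)
    if obj_id in memo:          # already copied -- reuse it, breaking the cycle
        return memo[obj_id]
    if isinstance(obj, list):
        new = []
        memo[obj_id] = new      # register *before* recursing into children
        new.extend(deepcopy_sketch(item, memo) for item in obj)
        return new
    if isinstance(obj, dict):
        new = {}
        memo[obj_id] = new
        for k, v in obj.items():
            new[deepcopy_sketch(k, memo)] = deepcopy_sketch(v, memo)
        return new
    return obj                  # treat everything else as atomic for this sketch

# A self-referential list is handled without infinite recursion:
a = [1, 2]
a.append(a)
b = deepcopy_sketch(a)
assert b[2] is b and b is not a

# The real copy.deepcopy behaves the same way (and also takes a memo dict):
c = copy.deepcopy(a)
assert c[2] is c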
