Dr Mephesto <[EMAIL PROTECTED]> writes:

> I would like to create a pretty big list of lists; a list 3,000,000
> long, each entry containing 5 empty lists. My application will
> append data to each of the 5 sublists, so they will be of varying
> lengths (so no arrays!).
>
> Does anyone know the most efficient way to do this? I have tried:
>
> list = [[[],[],[],[],[]] for _ in xrange(3000000)]
You might want to use a tuple as the container for the lower-level lists -- it's more compact and costs less allocation-wise. But the real problem is not list allocation vs tuple allocation, nor is it looping in Python; surprisingly, it's the GC. Notice this:

$ python
Python 2.5.1 (r251:54863, May 2 2007, 16:56:35)
[GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> t0=time.time(); l=[([],[],[],[],[]) for _ in xrange(3000000)];
>>> t1=time.time()
>>> t1-t0
143.89971613883972

Now, with the GC disabled:

$ python
Python 2.5.1 (r251:54863, May 2 2007, 16:56:35)
[GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import gc
>>> gc.disable()
>>> import time
>>> t0=time.time(); l=[([],[],[],[],[]) for _ in xrange(3000000)];
>>> t1=time.time()
>>> t1-t0
2.9048631191253662

The speed difference is staggering, almost 50-fold. I suspect the GC degrades the (amortized) linear-time list building into quadratic time. Since you allocate millions of small lists, the GC gets invoked every 700 or so allocations, and each pass has to visit more and more objects. I'm not sure if this can be fixed (shouldn't the generational GC only have to visit the freshly created objects rather than all of them?), but it has been noticed on this group before. If you're building large data structures and don't need to reclaim cyclical references, I suggest turning the GC off, at least during construction.
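As a rough sketch of the "turn GC off during construction" idea, something along these lines would do (build_rows is just an illustrative name, not from the thread); it disables the cyclic collector only while the big structure is being built and restores it afterwards:

import gc

def build_rows(n):
    # Disable the cyclic GC only while the large structure is built;
    # plain reference counting still reclaims ordinary garbage meanwhile.
    gc.disable()
    try:
        # A tuple holds the five sublists; each inner list can still
        # grow independently via append().
        return [([], [], [], [], []) for _ in xrange(n)]
    finally:
        gc.enable()   # restore normal collection once construction is done

rows = build_rows(3000000)
rows[0][2].append(42)     # append to the third sublist of the first row

The try/finally makes sure the collector is re-enabled even if construction raises, so the rest of the program runs with normal GC behaviour.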