Lasse Vågsæther Karlsen wrote:
> If I have a generator or other iterable producing a vast number of
> items, and use it like this:
>
> s = [k for k in iterable]
>
> if I know beforehand how many items iterable would possibly yield, would
> a construct like this be faster and "use" less memory?
>
> s = [0] * len(iterable)
> for i in xrange(len(iterable)):
>     s[i] = iterable.next()
You can easily answer the speed aspect of your question using the timeit
module:

~ $ python2.4 -m timeit -s'iterable=range(1000)' '[k for k in iterable]'
10000 loops, best of 3: 111 usec per loop

~ $ python2.4 -m timeit -s'iterable=range(1000)' 's = [0]*len(iterable); it = iter(iterable)' 'for i in xrange(len(iterable)): s[i] = it.next()'
1000 loops, best of 3: 513 usec per loop

~ $ python2.4 -m timeit -s'iterable=range(1000)' 's = [0]*len(iterable)' 'for i, v in enumerate(iterable): s[i] = v'
1000 loops, best of 3: 269 usec per loop

~ $ python2.4 -m timeit -s'iterable=range(1000)' 'list(iterable)'
100000 loops, best of 3: 7.33 usec per loop

Peter
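For anyone reading this on a current interpreter: the same comparison can be
run from a small script with timeit.timeit(). This is only a sketch of the
idea, not the commands above, and it assumes Python 3 conventions (xrange and
iterable.next() are gone, so range() and next(it) are used instead); the
snippet names and the choice of list(range(1000)) as the test data are mine,
and the absolute numbers will differ by machine and interpreter.

    # Rough Python 3 equivalent of the command-line comparison above.
    # Uses only the standard library; timings are illustrative only.
    import timeit

    setup = "iterable = list(range(1000))"

    snippets = {
        "list comprehension": "[k for k in iterable]",
        "preallocate + next()": (
            "s = [0] * len(iterable); it = iter(iterable)\n"
            "for i in range(len(iterable)): s[i] = next(it)"
        ),
        "preallocate + enumerate": (
            "s = [0] * len(iterable)\n"
            "for i, v in enumerate(iterable): s[i] = v"
        ),
        "list()": "list(iterable)",
    }

    for name, stmt in snippets.items():
        # timeit.timeit returns the total time in seconds for `number` runs.
        seconds = timeit.timeit(stmt, setup=setup, number=10000)
        print("%-25s %8.2f usec per loop" % (name, seconds / 10000 * 1e6))

The ranking should come out the same as in the python2.4 runs: plain list()
fastest, the comprehension next, and the preallocate-and-assign loops slowest.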