In <mailman.44.1257626420.2873.python-l...@python.org> Luis Alberto Zarrabeitia Gomez <ky...@uh.cu> writes:
>Quoting Andre Engels <andreeng...@gmail.com>:
>
>> On Sat, Nov 7, 2009 at 8:25 PM, Luis Alberto Zarrabeitia Gomez
>> <ky...@uh.cu> wrote:
>> >
>> > Ok, he has a dict.
>> >
>> > Now what? He needs a non-sparse array.
>>
>> Let d be your dict.
>>
>> Call the zeroeth place in your array d[0], the first d[1], the 10000th
>> d[100000].
>
>Following that reasoning, we could get rid of lists and arrays altogether.
>Here's why that wouldn't work:
>
>for x, y in zip(d, other):
>    ... do something ...
>
>Yes, we could also ignore zip and just use range/xrange to iterate over the
>indices...
>
>Lists and dictionaries have different semantics. One thing is to argue that
>you shouldn't be thinking of pre-growing a list for performance reasons
>before being sure that it is a bottleneck, and a very different one is to
>argue that, because one operation (__setitem__) is the same for both
>structures, we should not use lists for what may need, depending on the
>problem, list semantics.
>
>Have you ever tried to read a list/matrix that you know is not sparse, but
>whose size you don't know, and whose entries may not arrive in order? A
>"grow-able" array would just be the right thing to use - currently I have
>to settle for either hacking together my own grow-able array, or preloading
>the data into a dict, growing a list with the [0]*size trick, and updating
>that list. Not hard, not worthy of a PEP, but certainly not so easy to
>dismiss.

Thanks. Well said.

Saludos,
kynn
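
For reference, a minimal sketch of the dict-then-list workaround described in
the quoted message: collect the entries (which arrive out of order, total size
unknown) in a dict keyed by index, then flatten them into a dense list with the
[0]*size trick. The function name and sample data are illustrative, not from
the original post.

def dict_to_dense_list(d, default=0):
    """Convert {index: value} into a dense list, padding gaps with `default`."""
    size = max(d) + 1 if d else 0
    result = [default] * size          # the [0]*size trick mentioned above
    for index, value in d.items():
        result[index] = value
    return result

# Entries read out of order, with the final size not known in advance.
entries = {3: 'c', 0: 'a', 5: 'f'}
print(dict_to_dense_list(entries, default=None))
# -> ['a', None, None, 'c', None, 'f']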