Actually, they are different. Put a dict.{iter}items() call inside an O(k^N) algorithm with a hundred thousand entries and you will feel the difference. A dict uses hashing to retrieve a value, which is why lookup is O(1).
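To illustrate the O(1)-lookup point, here is a rough, hypothetical micro-benchmark (sizes and repeat counts are arbitrary choices, not from the thread) contrasting hashed dict membership with a linear list scan:

```python
import timeit

# Rough sketch: membership test in a dict (hash lookup, ~O(1))
# versus a list (linear scan, O(n)) for 100,000 entries.
# The sizes and repeat count here are arbitrary, for illustration only.
n = 100_000
d = {i: i for i in range(n)}
lst = list(range(n))

# Look up the worst-case element for the list (the last one), 1000 times.
dict_time = timeit.timeit(lambda: (n - 1) in d, number=1000)
list_time = timeit.timeit(lambda: (n - 1) in lst, number=1000)

print(f"dict lookup: {dict_time:.6f}s")
print(f"list search: {list_time:.6f}s")
```

On any recent CPython the dict lookup should come out several orders of magnitude faster, since it hashes the key instead of scanning. Note, though, that this measures *lookup*, not iteration, which is the distinction Tim draws below.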
On 10.08.2012, at 1:21, Tim Chase wrote:

> On 08/09/12 15:41, Roman Vashkevich wrote:
>> On 10.08.2012, at 0:35, Tim Chase wrote:
>>> On 08/09/12 15:22, Roman Vashkevich wrote:
>>>>> {(4, 5): 1, (5, 4): 1, (4, 4): 2, (2, 3): 1, (4, 3): 2}
>>>>> and i want to print to a file without the brackets commas and semicolon in
>>>>> order to obtain something like this?
>>>>> 4 5 1
>>>>> 5 4 1
>>>>> 4 4 2
>>>>> 2 3 1
>>>>> 4 3 2
>>>>
>>>> for key in dict:
>>>>     print key[0], key[1], dict[key]
>>>
>>> This might read more cleanly with tuple unpacking:
>>>
>>> for (edge1, edge2), cost in d.iteritems():  # or .items()
>>>     print edge1, edge2, cost
>>>
>>> (I'm making the assumption that this is an edge/cost graph... use
>>> appropriate names according to what they actually mean)
>>
>> dict.items() is a list - linear access time whereas with 'for
>> key in dict:' access time is constant:
>> http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#use-in-where-possible-1
>
> That link doesn't actually discuss dict.{iter}items()
>
> Both are O(N) because you have to touch each item in the dict--you
> can't iterate over N entries in less than O(N) time. For small
> data-sets, building the list and then iterating over it may be
> faster; for larger data-sets, the cost of building the list
> overshadows the (minor) overhead of a generator. Either way, the
> iterate-and-fetch-the-associated-value of .items() & .iteritems()
> can (should?) be optimized in Python's internals to the point I
> wouldn't think twice about using the more readable version.
>
> -tkc
>
> --
> http://mail.python.org/mailman/listinfo/python-list
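For what it's worth, here is Tim's tuple-unpacking version written as a complete, runnable Python 3 sketch (in Python 3 the print statement became a function, iteritems() is gone, and dict.items() returns a lightweight view rather than a list, so the list-vs-generator distinction above no longer applies; the filename is my own placeholder):

```python
# The OP's edge/cost dict, printed to a file one "edge1 edge2 cost" row
# per line, using tuple unpacking in the loop header.
d = {(4, 5): 1, (5, 4): 1, (4, 4): 2, (2, 3): 1, (4, 3): 2}

# "edges.txt" is just a placeholder filename for this example.
with open("edges.txt", "w") as f:
    for (edge1, edge2), cost in d.items():  # items() is a view in Python 3
        print(edge1, edge2, cost, file=f)
```

Since CPython 3.7, dicts preserve insertion order, so the rows come out in the order the literal lists them.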