Simon Forman wrote:
> I've got a function that I'd like to improve.
>
> It takes a list of lists and a "target" element, and it returns the set
> of the items in the lists that appear either before or after the target
> item. (Actually, it's a generator, and I use the set class outside of
> it to collect the unique items, but you get the idea. ;-) )
>
> data = [
>     ['this', 'string', 'is', 'nice'],
>     ['this', 'string', 'sucks'],
>     ['string', 'not', 'good'],
>     ['what', 'a', 'string']
>     ]
>
> def f(target, ListOfLists):
>     for N in ListOfLists:
>         try:
>             i = N.index(target)
>         except ValueError:
>             continue
>
>         # item before target
>         if i: yield N[i - 1]
>
>         # item after target
>         try:
>             yield N[i + 1]
>         except IndexError:
>             pass
>
> print set(n for n in f('string', data))
>
> # Prints set(['this', 'not', 'is', 'sucks', 'a'])
>
> Now, this works and normally I'd be happy with it and not try to
> improve it unless I found that it was a bottleneck. However, in this
> case I know that when a try..except statement fails (i.e. executes the
> except part of the statement) it's "slow", *and* I know that the real
> list of lists will have many lists in it in which the target item does
> not appear.
>
> That means that the first try..except statement will be executing its
> except clause more than half the time, and probably *much* more than
> half the time.
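For what it's worth, one way to sidestep the failed-lookup cost is to test
membership before calling .index(), so rows without the target never raise
ValueError at all. Here's a minimal sketch of that idea (the name
`neighbours` is just mine, and whether it actually beats the try..except
version depends on how often the target is missing; on a hit it also scans
the row twice, once for `in` and once for .index(), so it's worth timing
both on real data):

    data = [
        ['this', 'string', 'is', 'nice'],
        ['this', 'string', 'sucks'],
        ['string', 'not', 'good'],
        ['what', 'a', 'string'],
    ]

    def neighbours(target, list_of_lists):
        for row in list_of_lists:
            # Cheap rejection of rows that don't contain the target,
            # instead of letting .index() raise ValueError for them.
            if target not in row:
                continue
            i = row.index(target)
            if i:                       # item before the target
                yield row[i - 1]
            if i + 1 < len(row):        # item after the target
                yield row[i + 1]

    print(set(neighbours('string', data)))
    # Same five items as above: 'this', 'is', 'sucks', 'not', 'a'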
Nevermind, it turns out that each list in the list of lists will *always*
contain the target item, making this task much less interesting.

~Simon