Just a shorter implementation:

    from itertools import groupby

    def split(lst, func):
        # groupby() only merges consecutive runs, so sort by the key first,
        # then collect each run into a dict keyed by the predicate's result
        gs = {k: list(g) for k, g in groupby(sorted(lst, key=func), func)}
        return gs.get(True, []), gs.get(False, [])
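For example, splitting a range on parity with that version (just a quick sketch of the intended usage, with an illustrative predicate):

    evens, odds = split(range(10), lambda x: x % 2 == 0)
    # evens == [0, 2, 4, 6, 8], odds == [1, 3, 5, 7, 9]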
"Lie Ryan" <lie.1...@gmail.com> дÈëÏûÏ¢ÐÂÎÅ:nfi3m.2341$ze1.1...@news-server.bigpond.net.au... > Brad wrote: >> On Jul 2, 9:40 pm, "Pablo Torres N." <tn.pa...@gmail.com> wrote: >>> If it is speed that we are after, it's my understanding that map and >>> filter are faster than iterating with the for statement (and also >>> faster than list comprehensions). So here is a rewrite: >>> >>> def split(seq, func=bool): >>> t = filter(func, seq) >>> f = filter(lambda x: not func(x), seq) >>> return list(t), list(f) >>> >> >> In my simple tests, that takes 1.8x as long as the original solution. >> Better than the itertools solution, when "func" is short and fast. I >> think the solution here would worse if func was more complex. >> >> Either way, what I am still wondering is if people would find a built- >> in implementation useful? >> >> -Brad > > A built-in/itertools should always try to provide the general solution > to be as useful as possible, something like this: > > def group(seq, func=bool): > ret = {} > for item in seq: > fitem = func(item) > try: > ret[fitem].append(item) > except KeyError: > ret[fitem] = [item] > return ret > > definitely won't be faster, but it is a much more general solution. > Basically, the function allows you to group sequences based on the > result of func(item). It is similar to itertools.groupby() except that > this also group non-contiguous items.