On Thu, 29 Sep 2016 11:38 am, Tim Chase wrote:

> This seems to discard the data's origin (data1/data2/data3) which is
> how I determine whether to use process_a(), process_b(), or
> process_c() in my original example where N iterators were returned,
> one for each input iterator.
So add another stage to your generator pipeline, one which adds a unique
ID to the output of each generator, so you know where each item came from.

Hint: the ID doesn't have to be an ID *number*. It can be the process_a,
process_b, process_c ... function itself. Then instead of doing:

    for key, group in groupby(merge(data1, data2, data3), keyfunc):
        for id, x in group:
            if id == 1:
                process_a(key, *x)
            elif id == 2:
                process_b(key, *x)
            elif ...

or even:

    DISPATCH = {1: process_a, 2: process_b, ...}
    for key, group in groupby(merge(data1, data2, data3), keyfunc):
        for id, x in group:
            DISPATCH[id](key, *x)

you can do:

    for key, group in groupby(merge(data1, data2, data3), keyfunc):
        for process, x in group:
            process(key, x)


-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

-- 
https://mail.python.org/mailman/listinfo/python-list
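
For anyone who wants something concrete to paste into an interpreter, here
is a minimal sketch of that extra tagging stage, assuming the items in each
stream are (key, value) pairs and that the process_* functions take the key
and the value. The sample data, the tag() helper and the keyfunc are made up
for illustration; they are not from the original thread.

    from heapq import merge
    from itertools import groupby

    def process_a(key, value): print("a", key, value)
    def process_b(key, value): print("b", key, value)
    def process_c(key, value): print("c", key, value)

    def tag(iterable, process):
        # The extra pipeline stage: label each item with the function
        # that should eventually handle it.
        for key, value in iterable:
            yield key, process, value

    data1 = [(1, "spam"), (3, "eggs")]
    data2 = [(1, "ham"), (2, "toast")]
    data3 = [(2, "beans"), (3, "tomato")]

    keyfunc = lambda item: item[0]   # group on the shared key

    merged = merge(tag(data1, process_a),
                   tag(data2, process_b),
                   tag(data3, process_c),
                   key=keyfunc)

    for key, group in groupby(merged, keyfunc):
        for _, process, value in group:
            process(key, value)

Passing key=keyfunc to heapq.merge matters here: without it, items with
equal keys would fall back to comparing the tagged function objects, which
raises TypeError.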