On Thu, 29 Sep 2016 11:38 am, Tim Chase wrote:
> This seems to discard the data's origin (data1/data2/data3) which is
> how I determine whether to use process_a(), process_b(), or
> process_c() in my original example where N iterators were returned,
> one for each input iterator.
So add another field to each item recording which stream it came from.
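That suggestion might be sketched like this (my own reconstruction, not code from the thread; `tag` and `tagged_merge` are names I made up): tag each (key, value) pair with the index of its source stream, then merge the tagged streams into one key-sorted stream.

```python
import heapq

def tag(stream, src):
    """Yield (key, src, value) for each (key, value) in one stream."""
    for key, value in stream:
        yield (key, src, value)

def tagged_merge(*streams):
    """Merge (key, value) streams that are already sorted by key into
    one stream of (key, src, value), ordered by key then source index,
    so the origin of every item survives the merge."""
    return heapq.merge(*(tag(s, i) for i, s in enumerate(streams)))

# data1 is from the thread; the data2 values here are made up.
data1 = [(1, "one A"), (1, "one B"), (2, "two"), (5, "five")]
data2 = [(1, "un"), (2, "deux"), (3, "trois")]

for key, src, value in tagged_merge(data1, data2):
    print(key, src, value)
```

The `src` field is then enough to decide between process_a(), process_b(), and process_c() downstream.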
On 2016-09-29 10:20, Steve D'Aprano wrote:
> On Thu, 29 Sep 2016 05:10 am, Tim Chase wrote:
> > data1 = [ # key, data1
> >     (1, "one A"),
> >     (1, "one B"),
> >     (2, "two"),
> >     (5, "five"),
> >     ]
>
> So data1 has keys 1, 1, 2, 5.
> Likewise data2 has keys 1, 2, 3, 3, 3, 4, and data3 its own sorted run of keys.
Tim Chase wrote:
> I've got several iterators sharing a common key in the same order and
> would like to iterate over them in parallel, operating on all items
> with the same key. I've simplified the data a bit here, but it would
> be something like
>
> data1 = [ # key, data1
>     (1, "one A"),
>     [...]
Here is a slight variation of Chris A's code that does not require
more than a single look-ahead per generator. It may be better,
depending on the exact data passed in.

Chris A's version stores all of the items for each output that have
a matching key, which, depending on the expected data, could use a
significant amount of memory.
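The single-look-ahead idea might be sketched as follows (this is my own reconstruction, not the code actually posted; `Peekable`, `values_for`, and the reuse of Tim's name `magic_happens_here` are my choices). Each input iterator is wrapped with exactly one item of look-ahead, and the sub-iterators yield values lazily, so nothing is buffered per key:

```python
class Peekable:
    """Wrap an iterator with a single item of look-ahead."""
    _DONE = object()

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._head = next(self._it, Peekable._DONE)

    @property
    def exhausted(self):
        return self._head is Peekable._DONE

    def peek(self):
        return self._head

    def advance(self):
        item = self._head
        self._head = next(self._it, Peekable._DONE)
        return item


def magic_happens_here(*streams):
    """Yield (key, sub_iter, sub_iter, ...), one sub-iterator per
    stream.  Each sub-iterator lazily produces that stream's values
    for the current key.  Caveat: the caller must drain the
    sub-iterators in order (as Tim's usage loop does), because they
    all share the underlying single-look-ahead wrappers."""
    peeks = [Peekable(s) for s in streams]

    def values_for(p, key):
        # Pull values while this stream's next key matches.
        while not p.exhausted and p.peek()[0] == key:
            yield p.advance()[1]

    while True:
        live = [p.peek()[0] for p in peeks if not p.exhausted]
        if not live:
            return
        # Streams are key-sorted, so the smallest head is next.
        key = min(live)
        yield (key, *(values_for(p, key) for p in peeks))
```

A stream that lacks the current key simply yields an empty sub-iterator for that round.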
On Thu, Sep 29, 2016 at 5:10 AM, Tim Chase wrote:
> And I'd like to do something like
>
> for common_key, d1, d2, d3 in magic_happens_here(data1, data2, data3):
>     for row in d1:
>         process_a(common_key, row)
>     for row in d2:
>         process_b(common_key, row)
>     for row in d3:
>         process_c(common_key, row)
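A simple buffered sketch of that desired magic_happens_here (my own illustration, not from the thread — it trades memory for simplicity by reading each stream fully, unlike the look-ahead approach discussed elsewhere in this thread): group each key-sorted stream with itertools.groupby, then walk the union of keys in order, supplying an empty list for any stream that lacks a key.

```python
from itertools import groupby
from operator import itemgetter

def magic_happens_here(*streams):
    """Read each key-sorted (key, value) stream into a
    {key: [values]} dict, then yield (key, values_1, ..., values_N)
    for every key that appears in any stream, in sorted order."""
    grouped = [{k: [v for _, v in grp]
                for k, grp in groupby(s, key=itemgetter(0))}
               for s in streams]
    for key in sorted(set().union(*grouped)):
        # Streams missing this key contribute an empty list.
        yield (key, *(g.get(key, []) for g in grouped))

# data1 is from the thread; the data2 values here are made up.
data1 = [(1, "one A"), (1, "one B"), (2, "two"), (5, "five")]
data2 = [(1, "un"), (2, "deux"), (3, "trois")]

for common_key, d1, d2 in magic_happens_here(data1, data2):
    for row in d1:
        print("a", common_key, row)
    for row in d2:
        print("b", common_key, row)
```

Because everything is buffered up front, this is only suitable when the streams fit comfortably in memory.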