Matthias,

With option 2, how would we perform the join after re-partitioning? Although
we re-partitioned by value, the key doesn't change.
KTable joins always use keys, and the ValueJoiner gets values from both
tables when the keys match.

Having the data co-located will not be sufficient, right?
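
To make it concrete, this is roughly what a KTable-KTable join looks like.
Just a sketch, assuming Kafka Streams 0.10.0; the topic names "table1-topic"
and "table2-topic" are made up for illustration:

  import org.apache.kafka.common.serialization.Serdes;
  import org.apache.kafka.streams.kstream.KStreamBuilder;
  import org.apache.kafka.streams.kstream.KTable;

  KStreamBuilder builder = new KStreamBuilder();

  // Table1: 111 -> aaa, 222 -> bbb, 333 -> aaa
  KTable<String, String> table1 =
      builder.table(Serdes.String(), Serdes.String(), "table1-topic");

  // Table2: aaa -> 999, bbb -> 888, ccc -> 777
  KTable<String, String> table2 =
      builder.table(Serdes.String(), Serdes.String(), "table2-topic");

  // join() matches records by key only; the ValueJoiner sees the two values
  // for the *same* key. Table1's keys (111, 222, 333) never equal Table2's
  // keys (aaa, bbb, ccc), so this join produces nothing, no matter how the
  // partitions are laid out.
  KTable<String, String> joined =
      table1.join(table2, (v1, v2) -> v1 + "/" + v2);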

Srikanth

On Thu, Jul 14, 2016 at 4:12 AM, Matthias J. Sax <matth...@confluent.io>
wrote:

> I would recommend re-partitioning as described in Option 2.
>
> -Matthias
>
> On 07/13/2016 10:53 PM, Srikanth wrote:
> > Hello,
> >
> > I'm trying the following join using KTable. There are two changelog tables.
> > Table1
> >   111 -> aaa
> >   222 -> bbb
> >   333 -> aaa
> >
> > Table2
> >   aaa -> 999
> >   bbb -> 888
> >   ccc -> 777
> >
> > My result table should be
> >   111 -> 999
> >   222 -> 888
> >   333 -> 999
> >
> > It's not a case for join() as the keys don't match. It's more of a lookup table.
> >
> > Option 1 is to use Table1.toStream().process(ProcessorSupplier, "storeName").
> > punctuate() will use a regular Kafka consumer that reads updates from Table2
> > and updates a private map.
> > process() will do a key-value lookup.
> > This has an advantage when Table1 is much larger than Table2.
> > Each instance of the processor will have to hold the entire Table2.
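
[For reference, a rough, untested sketch of what I had in mind for Option 1,
assuming Kafka Streams 0.10.0. The bootstrap servers, the topic name
"table2-topic", the group id, and the poll interval below are placeholders.]

  import java.util.Collections;
  import java.util.HashMap;
  import java.util.Map;
  import java.util.Properties;

  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;
  import org.apache.kafka.streams.processor.Processor;
  import org.apache.kafka.streams.processor.ProcessorContext;

  public class Table2LookupProcessor implements Processor<String, String> {

      // private copy of Table2 (aaa -> 999, ...), refreshed in punctuate()
      private final Map<String, String> table2 = new HashMap<>();
      private KafkaConsumer<String, String> consumer;
      private ProcessorContext context;

      @Override
      public void init(ProcessorContext context) {
          this.context = context;
          Properties props = new Properties();
          props.put("bootstrap.servers", "localhost:9092");
          // own group per task so every instance sees all of Table2
          props.put("group.id", "table2-lookup-" + context.taskId());
          props.put("key.deserializer", StringDeserializer.class.getName());
          props.put("value.deserializer", StringDeserializer.class.getName());
          consumer = new KafkaConsumer<>(props);
          consumer.subscribe(Collections.singletonList("table2-topic"));
          context.schedule(60_000L);  // have punctuate() called periodically
      }

      @Override
      public void process(String key, String value) {
          // e.g. key = 111, value = aaa -> look up aaa in the Table2 copy
          String looked = table2.get(value);
          if (looked != null) {
              context.forward(key, looked);  // 111 -> 999
          }
      }

      @Override
      public void punctuate(long timestamp) {
          // pull any new Table2 updates into the private map
          for (ConsumerRecord<String, String> record : consumer.poll(100)) {
              table2.put(record.key(), record.value());
          }
      }

      @Override
      public void close() {
          consumer.close();
      }
  }

  // wired up as:
  //   table1.toStream().process(Table2LookupProcessor::new);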
> >
> > Option 2 is to re-partition Table1 using through(StreamPartitioner) and
> > partition by value.
> > This will ensure co-location. Then join with Table2. This part might be
> > tricky?
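
[Sketch of what I meant here, assuming Kafka Streams 0.10.0; "table1-by-value"
is a made-up, pre-created intermediate topic. This is also where my question
above comes from, since the key is still 111/222/333 after the through():]

  import java.nio.charset.StandardCharsets;

  import org.apache.kafka.common.utils.Utils;
  import org.apache.kafka.streams.processor.StreamPartitioner;

  // Partition Table1's records by *value* (aaa, bbb, ...), using the same
  // murmur2 hash the default producer partitioner applies to Table2's String
  // keys, so the records land next to the matching Table2 partitions.
  StreamPartitioner<String, String> byValue = (key, value, numPartitions) ->
      Utils.toPositive(Utils.murmur2(value.getBytes(StandardCharsets.UTF_8)))
          % numPartitions;

  KTable<String, String> repartitioned =
      table1.through(byValue, "table1-by-value");

  // The records in "table1-by-value" are still keyed 111/222/333, so a plain
  // repartitioned.join(table2, ...) would still find no matching keys.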
> >
> > Your comments and suggestions are welcome!
> >
> > Srikanth
> >
>
>
