Hi,

We have a Kafka Streams application that consumes from multiple topics with
different keys. Before processing these events in the application, we want
to repartition them on a single key that ensures related events are
processed by the same application instance. For example, the events on the
different topics all relate to the same call, but they are keyed differently
on each topic and therefore go to different application instances. After the
streams application consumes those events, we want to repartition them on
the call id and then process them in that order. Is this possible?
I came across the through() method on KStream, which seems to do something
similar, but I am not sure whether it would achieve the functionality below:

A call with id C123 is initiated, and the following events arrive on 3 topics
with their respective keys:

Event 1 on TopicA: with key a1
Event 2 on TopicB: with key b1
Event 3 on TopicC: with key c1

Let's assume these are consumed by 3 different instances of the Kafka Streams
application, and that each instance processes its event with the through()
method, writing to a consolidatedTopic with the key C123.
Will this ensure that all 3 events go to the same partition of
consolidatedTopic, and hence, after repartitioning, are consumed by the same
instance of the application?
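To make it concrete, here is a rough sketch of the topology I have in mind
(String serdes are assumed for brevity, and extractCallId()/process() are
placeholder helpers that would depend on our actual event payloads):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class CallRepartitionSketch {

    public static void buildTopology(StreamsBuilder builder) {
        // Consume the three source topics (default String serdes assumed).
        KStream<String, String> a = builder.stream("TopicA");
        KStream<String, String> b = builder.stream("TopicB");
        KStream<String, String> c = builder.stream("TopicC");

        // Merge the three streams and re-key every event by its call id,
        // so all events for the same call hash to the same partition.
        KStream<String, String> byCallId = a.merge(b).merge(c)
                .selectKey((key, value) -> extractCallId(value)); // e.g. "C123"

        // through() writes the re-keyed records to consolidatedTopic and
        // consumes them back, so downstream processing should run on the
        // instance that owns that partition.
        byCallId.through("consolidatedTopic")
                .foreach((callId, value) -> process(callId, value));
    }

    // Placeholder helpers -- real implementations depend on our payloads.
    private static String extractCallId(String value) { return "C123"; }

    private static void process(String callId, String value) { }
}

The intent is that selectKey() followed by through() repartitions the merged
stream by call id, but I would like to confirm that this actually guarantees
co-partitioning of the related events.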
