Hi,

I'd like your comments on a problem I ran into while testing my app with Kafka Streams (0.10.2.1).

Roughly, my streams app has 2 input topics:
. the first one has 4 partitions (the main data)
. the second one has only one partition and receives messages from time to time

At first, I supposed I had 2 sub-topologies:
A: from the first topic I build a state store using process(), with punctuate() activated
B: the second topic is used to trigger an analysis of the state store data, also using process()
(both processors use the same Kafka topic as sink)
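To make this concrete, here is roughly how the topology is wired. It is only a simplified sketch: the processor, store and topic names are placeholders for the real ones, and the processor bodies and serde details are stubbed out.

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.TopologyBuilder;
import org.apache.kafka.streams.state.Stores;

public class TopologySketch {

    // stand-in for my real processor: fills "main-store" from the main topic,
    // with punctuate() scheduled in init()
    static class StoreBuilderProcessor extends AbstractProcessor<String, String> {
        @Override
        public void process(String key, String value) {
            // ... update the "main-store" state store
        }
    }

    // stand-in for my real processor: reads the trigger topic and runs the
    // analysis against "main-store", forwarding the result to the sink
    static class AnalysisProcessor extends AbstractProcessor<String, String> {
        @Override
        public void process(String key, String value) {
            // ... read "main-store" and forward the analysis result
        }
    }

    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();

        builder.addSource("main-source", "main-topic")            // 4 partitions
               .addProcessor("store-builder", StoreBuilderProcessor::new, "main-source")
               .addStateStore(Stores.create("main-store")
                                    .withStringKeys()
                                    .withStringValues()
                                    .persistent()
                                    .build(),
                              "store-builder")

               .addSource("trigger-source", "trigger-topic")      // 1 partition
               .addProcessor("analysis", AnalysisProcessor::new, "trigger-source")
               .connectProcessorAndStateStores("analysis", "main-store")

               // both processors write to the same sink topic
               .addSink("out", "output-topic", "store-builder", "analysis");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();
    }
}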
During my tests I realised that the content of the state store seen by process B is based only on the data received on partition 0 of the first topic.

I finally understood that the link between those 2 sub-topologies forces the system to see them as one single topology, with only one task per partition reading both the first and the second topic; am I right?

I imagined 2 options to solve this issue:
option 1: replace topology B with a plain consumer on the second topic that triggers a query on the state store (rough sketch at the end of this mail)
option 2: give the second topic 4 partitions and write the same message to all 4 partitions

I tested both but I'm not sure which one is better... Do you have any other suggestions or comments?

Thanks in advance.
Hugues
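PS: to give a bit more detail on option 1, this is roughly what I tested. It is only a sketch: "main-store" and "trigger-topic" are the placeholder names from the sketch above, the consumer properties and error handling (e.g. InvalidStateStoreException while the store is being rebuilt) are left out, and 'streams' is the running KafkaStreams instance. With a single app instance, the local store should cover all 4 partitions of the first topic.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class TriggerLoop {

    // 'streams' is the running KafkaStreams instance that owns "main-store"
    static void run(KafkaStreams streams, Properties consumerProps) {
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
        consumer.subscribe(Collections.singletonList("trigger-topic"));

        while (true) {
            for (ConsumerRecord<String, String> trigger : consumer.poll(1000)) {
                // interactive query: read-only view over the store content held by
                // this instance (all 4 partitions when the app runs as a single instance)
                ReadOnlyKeyValueStore<String, String> store =
                        streams.store("main-store",
                                      QueryableStoreTypes.<String, String>keyValueStore());

                try (KeyValueIterator<String, String> it = store.all()) {
                    while (it.hasNext()) {
                        // ... run the analysis for this trigger record
                        it.next();
                    }
                }
            }
        }
    }
}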