Hi, we have a Kafka Streams application which merges (merge, groupByKey, aggregate) a few topics into one topic. The application is stateful, of course. There are currently six instances of the application running in parallel.
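Roughly, the topology looks like the sketch below (topic names, types, serdes, and the aggregation logic are simplified placeholders, not our actual code):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.state.KeyValueStore;

public class MergeTopology {

    // Topology shaped like ours: merge several input topics that share the
    // same key schema, group by that key, and aggregate into one output topic.
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> a =
                builder.stream("topic-a", Consumed.with(Serdes.String(), Serdes.String()));
        KStream<String, String> b =
                builder.stream("topic-b", Consumed.with(Serdes.String(), Serdes.String()));
        // The newly added topic, which has a different partition count than the others.
        KStream<String, String> c =
                builder.stream("topic-c", Consumed.with(Serdes.String(), Serdes.String()));

        KTable<String, String> aggregated = a
                .merge(b)
                .merge(c)
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
                .aggregate(
                        () -> "",                          // initializer (placeholder)
                        (key, value, agg) -> agg + value,  // aggregator (placeholder)
                        Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("merged-store"));

        aggregated.toStream().to("merged-topic", Produced.with(Serdes.String(), Serdes.String()));
        return builder.build();
    }
}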
We had an issue where one newly added input topic for the aggregation had a different partition count than all of the other topics. This caused data corruption in our application. We expected that Kafka Streams would either create a re-partitioning topic automatically or raise an error, but neither happened. Instead, some of the keys (all merged topics share the same key schema) ended up in at least two different instances of the application, so a single key is present in more than one local state store. Can you explain why this happened? As mentioned, we would have expected either an error or a re-partitioning topic in this case.

Cheers
Kay