I am about to put a Kafka Streams topology into production, and I am concerned that I don't know how to repartition/rebalance its topics if I later need to add more partitions.
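For context, by "adding more partitions" I mean an in-place expansion like the sketch below (the topic name, target count, and bootstrap address are all made up). My understanding is that growing a topic in place changes which partition the default partitioner assigns each key to, which is exactly why I worry about doing it under a stateful topology:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

import java.util.Map;
import java.util.Properties;

public class AddPartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Grow "input-topic" to 12 partitions in place. Existing records
            // stay where they are; only NEW records are spread over the new
            // count, so the key-to-partition mapping changes going forward.
            admin.createPartitions(Map.of("input-topic", NewPartitions.increaseTo(12)))
                 .all()
                 .get();
        }
    }
}
```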
My inclination is to spin up a new cluster and run some kind of consumer/producer combination that reads data from the old cluster and writes it to the new one; a new instance of the Kafka Streams application would then run against this new cluster. But I'm not sure how best to execute this, or whether the approach is sound at all; I can imagine many things going wrong. Without speculating further: what is the best way to do this?
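To make the idea concrete, here is a minimal sketch of the kind of consumer/producer bridge I mean. The bootstrap addresses, topic name, and group id are placeholders, I'm treating records as opaque bytes, and it deliberately ignores shutdown, offset tracking, ordering, and failure handling:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ClusterCopier {
    public static void main(String[] args) {
        // Consumer reads from the OLD cluster, starting at the beginning.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "old-cluster:9092");
        consumerProps.put("group.id", "cluster-copier");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", ByteArrayDeserializer.class.getName());
        consumerProps.put("value.deserializer", ByteArrayDeserializer.class.getName());

        // Producer writes to the NEW cluster, where the topic was created
        // up front with the higher partition count.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "new-cluster:9092");
        producerProps.put("key.serializer", ByteArraySerializer.class.getName());
        producerProps.put("value.serializer", ByteArraySerializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of("input-topic"));
            // Runs until killed. Re-sending with the original key lets the
            // new cluster's default partitioner spread records over the new
            // partition count.
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    producer.send(new ProducerRecord<>("input-topic", record.key(), record.value()));
                }
            }
        }
    }
}
```

Thank you,
Dmitry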