Hi, I had a question on the Kafka reassign-partitions tool.
We have a 3-node cluster, but our replication factor is set to 1, so we have been looking to increase it to 3 for HA. I tried the tool on a couple of topics and it increases the replication factor fine. It also doesn't change the leader, since the current leader is still in the reassigned replica list (RAR).

This is how I run it. The JSON used:

{"version":1,"partitions":[
  {"topic":"cric-engine.engine.eng_fow","partition":3,"replicas":[92,51,101]}]}

Earlier config for the topic:

kafka-topics --describe --topic cric-engine.engine.eng_fow --zookeeper 10.0.4.165:2181,10.0.5.139:2181,10.0.6.106:2181

Topic:cric-engine.engine.eng_fow  PartitionCount:5  ReplicationFactor:3  Configs:
    Topic: cric-engine.engine.eng_fow  Partition: 0  Leader: 101  Replicas: 92,51,101  Isr: 101,51,92
    Topic: cric-engine.engine.eng_fow  Partition: 1  Leader: 51   Replicas: 92,51,101  Isr: 51,101,92
    Topic: cric-engine.engine.eng_fow  Partition: 2  Leader: 92   Replicas: 92         Isr: 92
    Topic: cric-engine.engine.eng_fow  Partition: 3  Leader: 101  Replicas: 101        Isr: 101
    Topic: cric-engine.engine.eng_fow  Partition: 4  Leader: 51   Replicas: 51         Isr: 51

After running:

kafka-reassign-partitions --reassignment-json-file increase-replication-factor.json --execute --zookeeper 10.0.4.165:2181,10.0.5.139:2181,10.0.6.106:2181

partition 3's replicas increase:

kafka-topics --describe --topic cric-engine.engine.eng_fow --zookeeper 10.0.4.165:2181,10.0.5.139:2181,10.0.6.106:2181

Topic:cric-engine.engine.eng_fow  PartitionCount:5  ReplicationFactor:3  Configs:
    Topic: cric-engine.engine.eng_fow  Partition: 0  Leader: 101  Replicas: 92,51,101  Isr: 101,51,92
    Topic: cric-engine.engine.eng_fow  Partition: 1  Leader: 51   Replicas: 92,51,101  Isr: 51,101,92
    Topic: cric-engine.engine.eng_fow  Partition: 2  Leader: 92   Replicas: 92         Isr: 92
    Topic: cric-engine.engine.eng_fow  Partition: 3  Leader: 101  Replicas: 92,51,101  Isr: 101,51,92
    Topic: cric-engine.engine.eng_fow  Partition: 4  Leader: 51   Replicas: 51         Isr: 51

What I wanted to know is: does this affect the preferred replica? If you look at the Replicas column, the reassigned partitions now all show 92,51,101, even though the leaders have remained the same as before. So if any of the brokers goes down, or we run kafka-preferred-replica-election.sh, wouldn't it move all the leaders to broker 92? Is my assessment correct? If yes, is there a way I can still do this operation by first getting the leader for each partition, putting it at the front of the replica list, and then building the JSON dynamically? (See the P.S. below for a rough sketch of what I mean.)

Thanks!
Sagar.
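P.S. For the last part, this is roughly what I have in mind: just a rough sketch (untested) that shells out to kafka-topics --describe (the same command shown above), parses each partition's current leader, and writes a reassignment JSON with that leader first in the replica list. The broker list and output file name are placeholders for our setup:

#!/usr/bin/env python
# Rough sketch (untested): build a reassignment JSON that keeps each
# partition's current leader first in its replica list, so a later
# preferred-replica election should not move leadership.
import json
import re
import subprocess

TOPIC = "cric-engine.engine.eng_fow"
ZOOKEEPER = "10.0.4.165:2181,10.0.5.139:2181,10.0.6.106:2181"
BROKERS = [92, 51, 101]  # all broker ids in the cluster

# Reuse the describe output format shown above: "Partition: 3 Leader: 101 ..."
describe = subprocess.check_output(
    ["kafka-topics", "--describe", "--topic", TOPIC, "--zookeeper", ZOOKEEPER]
).decode()

partitions = []
for match in re.finditer(r"Partition:\s*(\d+)\s+Leader:\s*(\d+)", describe):
    partition, leader = int(match.group(1)), int(match.group(2))
    # Current leader goes first so it stays the preferred replica;
    # the remaining brokers are appended as followers.
    replicas = [leader] + [b for b in BROKERS if b != leader]
    partitions.append({"topic": TOPIC, "partition": partition, "replicas": replicas})

reassignment = {"version": 1, "partitions": partitions}
with open("increase-replication-factor.json", "w") as f:
    json.dump(reassignment, f)

print(json.dumps(reassignment, indent=2))

The thinking is that, as far as I understand, the preferred replica is simply the first broker in a partition's replica list, so keeping the current leader in that position should mean a preferred-replica election (or a broker bounce and recovery) leaves the leaders where they are.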