Hi, we have a 13-node Kafka cluster; each broker has multiple disks, and all topics have replication factor 3.
Broker 6 had a hardware issue that required a complete OS reload (Linux) and two disk replacements. I reinstalled Kafka on this node with the same broker ID (6), but now all producers throw an exception:

*[Error 6] NotLeaderForPartitionError: ProduceResponsePayload(topic=u'amyTopic', partition=7, error=6, offset=-1)*

I assume that because I reused the broker ID, the cluster (ZooKeeper? or the controller broker?) expects data on the disks that were replaced, or some other metadata that was wiped out during the OS reload.

What options do I have to *add this node back to the cluster* without much disturbance to the cluster and without data loss? Should I assign a new broker ID to this node and then reassign the partitions of every topic, as we would after adding a brand-new node? The cluster holds a lot of data (a few hundred TB), and I am trying to avoid the huge data movement a full reassignment would cause, since it could choke the entire cluster. Please suggest.
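For reference, this is roughly how the rebuilt broker was brought back up (a `server.properties` excerpt; the paths and ZooKeeper addresses here are placeholders, not our real values):

```properties
# server.properties on the rebuilt node -- same broker.id as before the failure
broker.id=6

# log.dirs points at the replaced (now empty) disks
log.dirs=/data/disk1/kafka-logs,/data/disk2/kafka-logs

zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
```

As I understand it, Kafka also records the broker ID in a `meta.properties` file inside each log directory; on the replaced disks those files no longer exist, which is part of why I suspect a metadata mismatch.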