Hello,

I'm working on a multi-data-center Kafka installation in which all clusters have 
the same topics, so producers can connect to any of the clusters. I would like 
the ability to dynamically control the set of clusters a producer is allowed to 
connect to, which would let us gracefully take a cluster offline for 
maintenance.
The current design is to have one ZooKeeper cluster that spans all data centers 
and holds information about which services are available in which cluster.

In the case of Kafka, it will hold the information needed to populate 
bootstrap.servers. A wrapper will be placed around the Kafka producer to watch 
this ZK value; when the value changes, the producer instance built with the old 
value will be shut down and a new producer with the new bootstrap.servers will 
replace it, roughly as in the sketch below.
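
To make it concrete, here is a rough sketch of the wrapper I have in mind (the 
znode path, serializers, and class name are just placeholders, and session 
expiry / error handling are left out):

import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class SwitchableProducer {

    // znode holding a comma-separated host:port list,
    // e.g. "dc1-kafka1:9092,dc2-kafka1:9092" (path is a placeholder)
    private static final String BOOTSTRAP_ZNODE = "/services/kafka/bootstrap.servers";

    private final ZooKeeper zk;
    private volatile Producer<String, String> delegate;

    public SwitchableProducer(String zkConnect) throws Exception {
        this.zk = new ZooKeeper(zkConnect, 30_000, event -> { });
        this.delegate = buildProducer(readAndWatch());
    }

    // Read the current value and re-register a data watch
    // (ZK watches are one-shot, so we set it again on every read).
    private String readAndWatch() throws Exception {
        byte[] data = zk.getData(BOOTSTRAP_ZNODE, event -> {
            if (event.getType() == Watcher.Event.EventType.NodeDataChanged) {
                try {
                    swap(readAndWatch());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, null);
        return new String(data, StandardCharsets.UTF_8);
    }

    private Producer<String, String> buildProducer(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return new KafkaProducer<>(props);
    }

    // Shut down the producer built with the old value and replace it.
    private synchronized void swap(String newBootstrapServers) {
        Producer<String, String> old = delegate;
        delegate = buildProducer(newBootstrapServers);
        old.close(); // blocks until previously sent records are delivered
    }

    public void send(String topic, String key, String value) {
        delegate.send(new ProducerRecord<>(topic, key, value));
    }
}

The idea is that swap() closes the old instance only after the new one is up, 
so callers of send() never hit a closed producer.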

Is there a best practice for achieving this?

Is there a way to dynamically update bootstrap.servers?

Does the producer always go to the same machine from bootstrap.servers when it 
refreshes its metadata after metadata.max.age.ms has expired?

Thanks!
