With a single zk in your zookeeper connect string, broker restarts are vulnerable to a single point of failure. If that zookeeper is offline, the broker will not start. You want at least two zookeepers in the connect string, for the same reason you should put more than one kafka broker in client bootstrap configs.
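For example (hostnames are placeholders), a broker's server.properties would list every ensemble member, comma-separated:

```properties
# server.properties -- hypothetical hosts; listing all ensemble members
# lets the broker start even when one zookeeper is unreachable
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
```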
You can probably get away with just updating the kafka broker settings with the additional zookeepers and not restarting the broker service, since the additional zookeepers wouldn't be useful until the next restart anyway.

-- Peter

> On Mar 7, 2020, at 8:40 PM, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:
>
> Hi Peter,
> That was a great explanation.
> However, I have a question about the last stage, where you mentioned updating
> the zookeeper servers in the services where a single zookeeper is used.
> Why do I need to do that?
> Is it because only a single zookeeper is used and you want to ensure high
> availability of zookeeper?
>
> What if tomorrow I add 2 more instances of zookeeper, for a total of 5? Is it
> required to add the 2 new ZK instances to my kafka brokers?
>
> Regards,
> Sunil.
>
>> On Sat, 7 Mar 2020 at 11:08 PM, Peter Bukowinski <pmb...@gmail.com> wrote:
>>
>> This change will require a brief interruption for services depending on the
>> current zookeeper, but only for the amount of time it takes the service on
>> the original zookeeper to restart. Here's the basic process:
>>
>> 1. Provision two new zookeeper hosts, but don't start the service on the
>> new hosts.
>> 2. Edit the zoo.cfg file on all hosts to contain the following lines
>> (assuming default ports):
>>
>> server.1=ORIGINAL_ZK_IP:2888:3888
>> server.2=SECOND_ZK_IP:2888:3888
>> server.3=THIRD_ZK_IP:2888:3888
>>
>> 3. Ensure the myid file on the second node contains '2' and on the third
>> node contains '3'.
>> 4. Start the second and third zookeeper services and ensure they have
>> become followers:
>>
>> echo stat | nc ZK2_IP 2181 | grep Mode
>> echo stat | nc ZK3_IP 2181 | grep Mode
>>
>> 5. Restart the original zookeeper service and then check the state of all
>> three zookeepers:
>>
>> echo stat | nc ZK1_IP 2181 | grep Mode
>> echo stat | nc ZK2_IP 2181 | grep Mode
>> echo stat | nc ZK3_IP 2181 | grep Mode
>>
>> You should see that one of the new zookeepers has become the leader.
>>
>> Now all that's left to do is update your zookeeper server strings in the
>> services that were previously using the single zookeeper.
>>
>> Hope this helped!
>>
>> --
>> Peter
>>
>>> On Mar 6, 2020, at 12:50 PM, JOHN, BIBIN <bj9...@att.com> wrote:
>>>
>>> Team,
>>> I currently have a 1-node ZK cluster which is working fine. Now I want
>>> to add 2 more nodes to the ZK cluster. Could you please provide best
>>> practices so I don't lose existing data?
>>>
>>> Thanks
>>> Bibin John
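The per-node checks in the procedure above can be wrapped in a small helper. This is a sketch, not part of the original thread: the host names are placeholders, it assumes `nc` is installed, and on ZooKeeper 3.5+ the `stat` four-letter command must be allowed via `4lw.commands.whitelist`. Note that `stat` output reports the role on a line beginning `Mode:`.

```shell
#!/bin/sh
# Extract the node's role (leader/follower/standalone) from the
# output of "echo stat | nc HOST 2181".
zk_role() {
  grep -E -o 'leader|follower|standalone'
}

# In practice you would pipe live output from each ensemble member:
#   for host in ZK1_IP ZK2_IP ZK3_IP; do
#     printf '%s: %s\n' "$host" "$(echo stat | nc "$host" 2181 | zk_role)"
#   done

# Demonstration on captured sample output:
sample='Zookeeper version: 3.5.8
Mode: follower'
printf '%s\n' "$sample" | zk_role
```

After step 5, running the loop against all three hosts should show exactly one leader and two followers.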