This change will require a brief interruption for services that depend on the current zookeeper, but only for the time it takes the service on the original zookeeper to restart. Here’s the basic process:
1. Provision two new zookeeper hosts, but don’t start the service on them yet.

2. Edit the zoo.cfg file on all three hosts to contain the following lines (assuming default ports):

   server.1=ORIGINAL_ZK_IP:2888:3888
   server.2=SECOND_ZK_IP:2888:3888
   server.3=THIRD_ZK_IP:2888:3888

3. Ensure the myid file on the second node contains ‘2’ and on the third node contains ‘3’.

4. Start the second and third zookeeper services and ensure they have become followers:

   echo stat | nc ZK2_IP 2181 | grep Mode
   echo stat | nc ZK3_IP 2181 | grep Mode

5. Restart the original zookeeper service, then check the state of all three zookeepers:

   echo stat | nc ZK1_IP 2181 | grep Mode
   echo stat | nc ZK2_IP 2181 | grep Mode
   echo stat | nc ZK3_IP 2181 | grep Mode

You should see that one of the new zookeepers has become the leader. Now all that’s left to do is update the zookeeper connection strings in the services that were previously using the single zookeeper.

Hope this helped!

— Peter

> On Mar 6, 2020, at 12:50 PM, JOHN, BIBIN <bj9...@att.com> wrote:
>
> Team,
> I currently have a 1 node ZK cluster which is working fine. Now I want to
> add 2 more nodes to the ZK cluster. Could you please provide best
> practice so I don't lose existing data?
>
>
> Thanks
> Bibin John
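If you end up running those per-node state checks often, you can factor the grep into a tiny helper. A minimal sketch, assuming the standard four-letter-word `stat` output (which reports a `Mode: leader|follower|standalone` line); the `zk_mode` function name and the canned sample output are illustrative, not part of ZooKeeper itself:

```shell
# zk_mode: hypothetical helper that extracts the Mode (leader/follower/
# standalone) from ZooKeeper's `stat` output. Against a live node:
#   echo stat | nc ZK1_IP 2181 | zk_mode
zk_mode() {
  grep '^Mode:' | awk '{print $2}'
}

# Demonstration against a canned sample of `stat` output (no live node
# needed), showing the line the grep matches and the word awk extracts:
sample='Zookeeper version: 3.4.14
Latency min/avg/max: 0/0/0
Received: 1
Mode: follower
Node count: 4'
printf '%s\n' "$sample" | zk_mode   # prints "follower"
```

Looping it over ZK1_IP, ZK2_IP, and ZK3_IP after step 5 gives you the leader/follower picture in one glance.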