Hi Hans,

Thank you for your reply.
It's basically two different server rooms on different floors, connected over fiber, so it's almost like a local connection between them: no network latency or lag.

If I set up MirrorMaker / Replicator, I will not be able to use both clusters at the same time for writes/producers, because the consumers/producers will request from all of them.
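To illustrate the constraint, here is a rough sketch of how the legacy MirrorMaker would have to run between the sites, one direction per process (the hostnames and property-file names below are made up):

    # Site A -> Site B. siteA-consumer.properties points at Site A's
    # cluster, siteB-producer.properties at Site B's brokers.
    bin/kafka-mirror-maker.sh \
      --consumer.config siteA-consumer.properties \
      --producer.config siteB-producer.properties \
      --whitelist '.*'

    # A mirror-image process would be needed for Site B -> Site A, and with
    # identical topic names on both sides that creates a replication loop.
    # The usual workaround is site-prefixed topic names, but then each
    # site's producers can only write to their local prefix.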
BR,

Lee

On Mon, Mar 6, 2017 at 2:50 PM, Hans Jespersen <h...@confluent.io> wrote:

> What do you mean when you say you have "2 sites not datacenters"? You
> should be very careful configuring a stretch cluster across multiple
> sites. What is the RTT between the two sites? Why do you think that
> MirrorMaker (or Confluent Replicator) would not work between the sites
> and yet you think a stretch cluster will work? That seems wrong.
>
> -hans
>
> /**
>  * Hans Jespersen, Principal Systems Engineer, Confluent Inc.
>  * h...@confluent.io (650)924-2670
>  */
>
> On Mon, Mar 6, 2017 at 5:37 AM, Le Cyberian <lecyber...@gmail.com> wrote:
>
> > Hi Guys,
> >
> > Thank you very much for your reply.
> >
> > The scenario I have to implement involves 2 sites, not datacenters, so
> > MirrorMaker would not work here.
> >
> > There will be 4 nodes in total: 2 in Site A and 2 in Site B. The idea
> > is to have an active-active setup along with fault tolerance, so that
> > if one of the sites goes down, operations stay normal.
> >
> > In this case, if I go ahead with a 4-node cluster of both ZooKeeper
> > and Kafka, it will give failover tolerance for 1 node only.
> >
> > What do you suggest I do in this case? Because to divide between 2
> > sites it needs to be an even number, if that makes sense. Also, if
> > possible, some help regarding partitions per topic and the replication
> > factor would be appreciated.
> >
> > I already have Kafka running with quite a few topics that have
> > replication factor 1 and the default single partition. Is there a way
> > to repartition / increase the partitions of existing topics when I
> > migrate to the above setup? I think we can increase the replication
> > factor with the Kafka rebalance tool.
> >
> > Thanks a lot for your help and time looking into this.
> >
> > BR,
> >
> > Le
> >
> > On Mon, Mar 6, 2017 at 12:20 PM, Hans Jespersen <h...@confluent.io>
> > wrote:
> >
> > > Jens,
> > >
> > > I think you are correct that a 4 node zookeeper ensemble can be made
> > > to work, but it will be slightly less resilient than a 3 node
> > > ensemble, because it can only tolerate 1 failure (same as a 3 node
> > > ensemble) and the likelihood of node failures is higher because
> > > there is 1 more node that could fail.
> > > So it SHOULD be an odd number of zookeeper nodes (not MUST).
> > >
> > > -hans
> > >
> > > > On Mar 6, 2017, at 12:20 AM, Jens Rantil <jens.ran...@tink.se>
> > > > wrote:
> > > >
> > > > Hi Hans,
> > > >
> > > > > On Mon, Mar 6, 2017 at 12:10 AM, Hans Jespersen
> > > > > <h...@confluent.io> wrote:
> > > > >
> > > > > A 4 node zookeeper ensemble will not even work. It MUST be an
> > > > > odd number of zookeeper nodes to start.
> > > >
> > > > Are you sure about that? If ZooKeeper doesn't run with four nodes,
> > > > that means a running ensemble of three can't be live-migrated to
> > > > other nodes (because that's done by increasing the ensemble and
> > > > then reducing it, in the case of 3-node ensembles). IIRC, you can
> > > > run four ZooKeeper nodes, but that means quorum will be three
> > > > nodes, so there's no added benefit in terms of availability, since
> > > > you can only lose one node, just like with a three node cluster.
> > > >
> > > > Cheers,
> > > > Jens
> > > >
> > > > --
> > > > Jens Rantil
> > > > Backend engineer
> > > > Tink AB
> > > >
> > > > Email: jens.ran...@tink.se
> > > > Phone: +46 708 84 18 32
> > > > Web: www.tink.se
> > > >
> > > > Facebook <https://www.facebook.com/#!/tink.se> LinkedIn
> > > > <http://www.linkedin.com/company/2735919?trk=vsrp_companies_res_photo&trkInfo=VSRPsearchId%3A1057023381369207406670%2CVSRPtargetId%3A2735919%2CVSRPcmpt%3Aprimary>
> > > > Twitter <https://twitter.com/tink>
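P.S. On the repartitioning question from my earlier mail above: as far as I understand, the stock tools can do both steps. A rough sketch (the topic name, ZooKeeper address, broker IDs, and JSON file name are placeholders):

    # Increase the partition count of an existing topic. Note that this
    # does not move existing data, and it changes the key-to-partition
    # mapping for keyed topics.
    bin/kafka-topics.sh --zookeeper zk1:2181 --alter \
      --topic my-topic --partitions 4

    # Increase the replication factor by reassigning partitions with a JSON
    # file that lists more replicas per partition, e.g. increase-rf.json:
    #   {"version":1,"partitions":[
    #     {"topic":"my-topic","partition":0,"replicas":[1,2]}]}
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --reassignment-json-file increase-rf.json --execute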