Hi Neha,

Thanks for your reply.
I am now using the MirrorMaker tool to replicate data between Kafka clusters, but I am facing one problem: messages get duplicated if MirrorMaker is killed forcefully [*kill -9*]. Is there any solution to avoid these duplicate entries in the target cluster? I am using Kafka *0.8.1.1*.

On Mon, Dec 8, 2014 at 11:17 PM, Neha Narkhede <n...@confluent.io> wrote:

> Hi Madhukar,
>
> From the same documentation link you referred to:
>
> The source and destination clusters are completely independent entities:
> they can have different numbers of partitions and the offsets will not be
> the same. For this reason the mirror cluster is not really intended as a
> fault-tolerance mechanism (as the consumer position will be different); for
> that we recommend using normal in-cluster replication. The mirror maker
> process will, however, retain and use the message key for partitioning so
> order is preserved on a per-key basis.
>
> There is no way to set up an *exact* Kafka mirror yet.
>
> Thanks,
> Neha
>
> On Mon, Dec 8, 2014 at 7:47 AM, Madhukar Bharti <bhartimadhu...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I am going to set up Kafka clusters having 3 brokers in Datacenter 1.
> > Topics can be created from time to time. Each topic can have varying
> > partitions, mostly 1, 10 or 20. Each application might have a different
> > partitioning algorithm that we don't know (let it be hidden from the
> > ops team).
> >
> > We want to set up the MirrorMaker tool in such a way that the
> > partitioned data goes to the same partition without knowing the topic's
> > partitioning logic, and it should be *generalized*. [This should be
> > common for all topics.]
> >
> > *Like partition 0 at Datacenter1 should be an exact mirror of
> > partition 0 in Datacenter2.*
> >
> > Please suggest a solution for doing so. If the MirrorMaker
> > <http://kafka.apache.org/documentation.html#basic_ops_mirror_maker> tool
> > provides any configuration which solves this use case, please let me know.
> > Regards,
> > Madhukar Bharti

> --
> Thanks,
> Neha

--
Thanks and Regards,
Madhukar Bharti
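On the duplicates question above: MirrorMaker gives at-least-once delivery, so a *kill -9* before its consumer offsets are committed replays messages into the target cluster. A common mitigation is to make the downstream consumer idempotent: embed a unique id in each message and drop ids already processed. A minimal sketch of that idea (illustrative only; `dedupe` and the id scheme are assumptions, not a MirrorMaker feature):

```python
# Sketch: consumer-side deduplication for at-least-once delivery.
# Assumes the producer embeds a unique id in every message; `dedupe`
# is a hypothetical helper, not part of Kafka or MirrorMaker.

def dedupe(records, seen):
    """Yield each (msg_id, value) record once, tracking ids in `seen`."""
    for msg_id, value in records:
        if msg_id in seen:
            continue  # replayed duplicate; drop it
        seen.add(msg_id)
        yield value

# Simulate a replay after a forced kill: ids 2 and 3 arrive twice.
first_pass = [(1, "a"), (2, "b"), (3, "c")]
replayed = [(2, "b"), (3, "c"), (4, "d")]

seen = set()
out = list(dedupe(first_pass, seen)) + list(dedupe(replayed, seen))
print(out)  # ['a', 'b', 'c', 'd']
```

In practice the `seen` set would need to be bounded (e.g. keep only recent ids) and persisted alongside the consumer's own offsets, otherwise a consumer restart reopens the same window for duplicates.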