Hi Vincent,
We have an ELK cluster in both the primary and backup DCs. So the end goal of
the consumers (Logstash) is to index logs into Elasticsearch and show them in
Kibana. We are replicating data between the ELK clusters using MirrorMaker.
It's not possible to consume from both DCs at the same time, as the components
which produce and consume are active in only one DC at a time.
What is the end result produced by your consumers?
From what I understand, the requirement for no duplicates means that these
duplicates can currently show up somewhere?
Depending on your needs, you can also have consumers in the two DCs consuming
from both. Then you don't have duplicates, because a message is either in one
cluster or the other.
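The suggestion above can be sketched as a minimal simulation (the cluster
contents and message names are hypothetical, and no real Kafka client is
involved): if each message is produced to exactly one DC's cluster, then
consuming both clusters yields no duplicates.

```python
# Hypothetical per-DC topic contents; each message is produced in one DC only.
primary_cluster = ["msg-1", "msg-2"]  # produced in the primary DC
backup_cluster = ["msg-3"]            # produced in the backup DC

# Consumers collectively read both clusters.
consumed = list(primary_cluster) + list(backup_cluster)

# No message exists in both clusters, hence no duplicates downstream.
assert len(consumed) == len(set(consumed))
```

This only holds as long as the topics being consumed are not themselves
mirrored between the clusters; once MirrorMaker copies a topic across,
the same message exists in both places.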
Hi Vincent,
Our producers and consumers are indeed local to the Kafka cluster. When we
switch DCs, everything switches: when we are on backup, the producers and
consumers in the backup DC are active and everything in the primary DC is
stopped. Whatever data gets accumulated in the backup DC needs to be
reflected in the primary DC.
Hi Shantanu,
I am not sure the scenario you are describing is the best approach. I would
rather consider the problem in terms of producers and consumers of the data.
It is usually good practice to keep your producers local to your Kafka
cluster, so in your case I would suggest you have producers in the main DC.
Purging will never guarantee that data does not get replicated. There will
always be some case (a failure to purge, etc.) where it is still replicated. You
may reduce the probability, but it will never be impossible.
Your application should be able to handle duplicated messages.
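The advice above (make the application tolerate duplicates) is usually
implemented as an idempotent consumer. A minimal sketch, assuming each
message carries a stable ID (the field name "doc_id", the in-memory "seen"
set, and the dict used as a sink are all illustrative; a real deployment
would use a durable store, or Elasticsearch's own document _id so that
re-indexing the same document is a no-op):

```python
def index_once(messages, sink):
    """Index messages into sink, skipping duplicates by stable ID."""
    seen = set()
    for msg in messages:
        doc_id = msg["doc_id"]
        if doc_id in seen:  # duplicate delivered by replication or retry
            continue
        seen.add(doc_id)
        sink[doc_id] = msg["body"]  # idempotent upsert keyed by ID

sink = {}
msgs = [
    {"doc_id": "a1", "body": "log line 1"},
    {"doc_id": "a1", "body": "log line 1"},  # duplicate after failover
    {"doc_id": "b2", "body": "log line 2"},
]
index_once(msgs, sink)
assert len(sink) == 2  # the duplicate was dropped
```

Because the sink is keyed by document ID, even a crash-and-replay that
re-delivers already-indexed messages leaves the sink unchanged.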
Hello,
We have cross-data-center replication. Using Kafka MirrorMaker we are
replicating data from our primary cluster to a backup cluster. The problem
arises when we start operating from the backup cluster, in case of a drill or an
actual outage. Data gathered at the backup cluster needs to be
reverse-replicated to the primary cluster.