I got it to work by bringing up a Kafka Connect cluster from which to launch MM2.
Silly question.
On Mon, 20 Mar 2023 at 23:09, Miguel Ángel Fernández Fernández (<
miguelangelprogramac...@gmail.com>) wrote:
> I have two clusters up on the same machine with docker-compose
>
> services:
> zookeeper
I have two clusters up on the same machine with docker-compose
services:
  zookeeper-lab:
    image: "bitnami/zookeeper:3.8.1"
    restart: always
    environment:
      ZOO_PORT_NUMBER: 2183
      ALLOW_ANONYMOUS_LOGIN: "yes"
    ports:
      - "2183:2183"
      - "2886:2888"
      - "3886:3888"
  kafk
Hi Miguel,
How many nodes are you running MM2 with? Just one?
Separately, do you notice anything at ERROR level in the logs?
Cheers,
Chris
On Mon, Mar 20, 2023 at 5:35 PM Miguel Ángel Fernández Fernández <
miguelangelprogramac...@gmail.com> wrote:
> Hello,
>
> I'm doing some tests with Mirror
yep!
On Wed, Jul 21, 2021, 3:18 AM Tomer Zeltzer
wrote:
> Hi,
>
> Can I use MirrorMaker2.0 from Kafka 2.8.0 with Kafka version 2.4.0?
>
> Thanks,
> Tomer Zeltzer
>
Hi Madhan,
try this article I found a while back, in case this also becomes my use case:
https://stackoverflow.com/questions/59390555/is-it-possible-to-replicate-kafka-topics-without-alias-prefix-with-mirrormaker2
On Thu, Apr 22, 2021 at 9:40 PM Dhanikachalam, Madhan (CORP)
wrote:
> I am testing
Hey Madhan,
The easiest way to get rid of aliases in the topic names is to add the
following to your config:
replication.policy.separator=
source.cluster.alias=
target.cluster.alias=
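(To put those lines in context, a sketch of how they might sit in a full mm2.properties; the cluster names and bootstrap servers are placeholders. With the separator and aliases blanked out, "topic1" on the source is replicated as plain "topic1" on the target rather than "source.topic1".)
clusters = source, target
source.bootstrap.servers = source-kafka:9092
target.bootstrap.servers = target-kafka:9092
source->target.enabled = true
source->target.topics = .*
# Blank these out so replicated topics keep their original names.
replication.policy.separator=
source.cluster.alias=
target.cluster.alias=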
On Thu, Apr 22, 2021 at 11:40 PM Dhanikachalam, Madhan (CORP)
wrote:
> I am testing MM2. I got the connector
> Where is the "at-least once" delivery guarantee mentioned? Just for the record.
>
> Kind Regards,
>
> From: Ning Zhang
> Sent: Wednesday, 17 March 2021 22:39
> To: users@kafka.apache.org
> Subject: Re: Mirrormaker 2.0 - duplicates with idempotence enabled
>
> Hello Vang
Hello Vangelis,
By default, current MM 2.0 provides an "at-least once" delivery guarantee,
meaning there will be duplicate messages under some failure scenarios.
If you prefer no message loss, there is a pending PR for MM 2.0:
https://issues.apache.org/jira/browse/KAFKA-10339
On 2021/03/10
Hi Ryanne/Josh,
I'm working on active-active MirrorMaker and translating consumer offsets
from source cluster A to dest cluster B. Any pointer would be helpful.
Cluster A
Cluster name: A
Topic name: testA
Consumer group name: mm-testA-consumer
Cluster B
Cluster name: B
Topic name: sou
Josh, make sure there is a consumer in cluster B subscribed to A.topic1.
Wait a few seconds for a checkpoint to appear upstream on cluster A, and
then translateOffsets() will give you the correct offsets.
By default MM2 will block consumers that look like kafka-console-consumer,
so make sure you sp
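(For reference, since the method name comes up here: a minimal sketch of calling offset translation via RemoteClusterUtils. The bootstrap servers, the cluster alias "A", and the group name are placeholders, not values from this thread.)
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.mirror.RemoteClusterUtils;

public class TranslateOffsetsExample {
    public static void main(String[] args) throws Exception {
        // Connection properties for the cluster the consumer group is moving to
        // (the cluster where MM2 has been writing checkpoints for the group).
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "cluster-b:9092");

        // Translate the group's committed offsets from the remote cluster ("A")
        // into equivalent local offsets, based on MM2's checkpoints.
        Map<TopicPartition, OffsetAndMetadata> translated = RemoteClusterUtils.translateOffsets(
                props, "A", "my-consumer-group", Duration.ofSeconds(30));

        // These offsets can then be committed for the group, or used with seek().
        translated.forEach((tp, om) -> System.out.println(tp + " -> " + om.offset()));
    }
}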
Thanks again Ryanne, I didn't realize that MM2 would handle that.
However, I'm unable to mirror the remote topic back to the source cluster
by adding it to the topic whitelist. I've also tried to update the topic
blacklist and remove ".*\.replica" (since the blacklists take precedence
over the whi
Josh, if you have two clusters with bidirectional replication, you only get
two copies of each record. MM2 won't replicate the data "upstream", cuz it
knows it's already there. In particular, MM2 knows not to create topics
like B.A.topic1 on cluster A, as this would be an unnecessary cycle.
> is
Sorry, correction -- I am realizing now it would be 3 copies of the same
topic data as A.topic1 has different data than B.topic1. However, that
would still be 3 copies as opposed to just 2 with something like topic1 and
A.topic1.
As well, if I were to explicitly replicate the remote topic back to
Thanks for the clarification Ryanne. In the context of active/active
clusters, does this mean there would be 6 copies of the same topic data?
A topics:
- topic1
- B.topic1
- B.A.topic1
B topics:
- topic1
- A.topic1
- A.B.topic1
Out of curiosity, is there a reason for MM2 not emitting checkpoint
Josh, yes it's possible to migrate the consumer group back to the source
topic, but you need to explicitly replicate the remote topic back to the
source cluster -- otherwise no checkpoints will flow "upstream":
A->B.topics=test1
B->A.topics=A.test1
After the first checkpoint is emitted upstream,
From: Sönke Liebau
Sent: Wednesday, 18 March 2020 1:12 PM
To: users@kafka.apache.org
Subject: Re: Mirrormaker 2.0 and compacted topics
Hi Pirow,
records at the same offset as in the original topic is not possible for non
com
> *From:* Sönke Liebau
> *Sent:* Wednesday, 18 March 2020 12:14 PM
> *To:* users@kafka.apache.org
> *Subject:* Re: Mirrormaker 2.0 and compacted topics
>
>
>
> Hi Pirow,
>
>
From: Sönke Liebau
Sent: Wednesday, 18 March 2020 12:14 PM
To: users@kaf
Hi Pirow,
as far as I understand, MirrorMaker 2.0 will not treat compacted topics any
differently from uncompacted topics.
What that means for your scenario is that your replication may miss some
messages in the case of a long unavailability, if those messages were
compacted in the meantime. However
Ok, I see. I almost started to work on it, but figured out that we do not
need it now.
Thanks for the help around this topic :)
Peter
On Tue, 21 Jan 2020 at 21:04, Ryanne Dolan wrote:
> Peter, the LegacyReplicationPolicy class is described in the existing
> KIP-382 and is a requirement for the
Peter, the LegacyReplicationPolicy class is described in the existing
KIP-382 and is a requirement for the deprecation of MM1. I was planning to
implement it but would love the help if you're interested.
Ryanne
On Tue, Jan 21, 2020, 8:25 AM Péter Sinóros-Szabó
wrote:
> Ryanne,
>
> I didn't do m
Ryanne,
I didn't do much work yet, just checked the Interface to see if it is easy
to implement or not.
> The PR for LegacyReplicationPolicy should include any relevant fixes to
get it to run without crashing
Do you mean that there is already a PR for LegacyReplicationPolicy? If
there is, please
Peter, KIP-382 includes LegacyReplicationPolicy for this purpose, but no,
it has not been implemented yet. If you are interested in writing the PR,
it would not require a separate KIP before merging. Looks like you are
already doing the work :)
It is possible, as you point out, that returning null
Hi Sebastian & Ryanne,
do you maybe have an implementation of this, or just some ideas about how to
implement the policy that does not rename topics?
I am checking the ReplicationPolicy interface and don't really know what
the impact will be if I implement this:
public String formatRemoteTopic(Str
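(For anyone wondering what implementing it might look like: a rough sketch of a policy that keeps the original topic name, written purely to illustrate the interface rather than as a tested implementation. As touched on above, returning null here means MM2 can no longer tell remote topics from local ones.)
import org.apache.kafka.connect.mirror.ReplicationPolicy;

public class NoRenameReplicationPolicy implements ReplicationPolicy {

    // Keep the original name: "replicateme" on the source stays "replicateme" on the target.
    @Override
    public String formatRemoteTopic(String sourceClusterAlias, String topic) {
        return topic;
    }

    // Without an alias prefix there is no way to recover which cluster a topic came from.
    @Override
    public String topicSource(String topic) {
        return null;
    }

    @Override
    public String upstreamTopic(String topic) {
        return null;
    }
}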
Peter, that's right. So long as ReplicationPolicy is implemented with
proper semantics (i.e. the methods do what they say they should do) any
naming convention will work. You can also use something like double
underscore "__" as a separator with DefaultReplicationPolicy -- it doesn't
need to be a s
Hi Ryanne,
Am I right that as far as I implement ReplicationPolicy properly, those
features you just mentioned will work fine?
Asking because we already use dot (.), underscore (_), and even hyphen (-)
characters in non-replicated topics :D , so it seems that we will
need a more advanced renamin
Hello Ryanne,
thank you, that helps to get a better understanding.
We'll just wait until something better is available and until then use
the legacy-mode of MM2...
Best regards
Sebastian
On 30-Dec-19 7:04 PM, Ryanne Dolan wrote:
> Is there a way to prevent that from happening?
Unfortunately there is no tooling (yet?) to manipulate Connect's offsets,
so it's difficult to force MM2 to skip ahead, reset, etc.
One approach is to use Connect's Simple Message Transform feature. This
enables you to filter the messages being rep
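(Purely to illustrate the wiring, and assuming a Connect version new enough to ship the built-in Filter transform and predicates: an SMT added to the connector configuration might look like the lines below. Skipping ahead by offset or timestamp would need a custom Transformation carrying that logic; the predicate here simply drops tombstone records.)
transforms = dropTombstones
transforms.dropTombstones.type = org.apache.kafka.connect.transforms.Filter
transforms.dropTombstones.predicate = isTombstone
predicates = isTombstone
predicates.isTombstone.type = org.apache.kafka.connect.transforms.predicates.RecordIsTombstone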
Sebastian, you can drop in a custom jar in the "Connect plug-in path" and
MM2 will be able to load it. That enables you to implement your own
ReplicationPolicy (and other pluggable interfaces) without compiling
everything.
In an upcoming release we'll have a "LegacyReplicationPolicy" that does not
Hello,
I found that it's using the DefaultReplicationPolicy that always returns
"sourceClusterAlias + separator + topic" with only the separator being
configurable in the configuration-file with REPLICATION_POLICY_SEPARATOR.
It seems like I need a different ReplicationPolicy, like a
SimpleRe
Hello,
another thing I found, and didn't see any configuration for in the KIP yet,
is that if I have two clusters (source and target) and a topic
"replicateme" on the source cluster, it will get replicated to the
target cluster as "source.replicateme".
How can I stop it from adding the cluster-na
Hello Ryanne,
Are there any plans to implement easy-to-use throttling, to be a little
kinder to the cluster that we start to replicate?
I guess it is possible to use the existing throttling in the source and
destination clusters, but it is not really easy to use.
Also maybe an option to st
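(One option that exists today, sketched with placeholder values: give MM2's clients a fixed client.id -- see the source.client.id / target.client.id properties mentioned elsewhere in this thread -- and set broker-side quotas for it. On reasonably recent brokers that might look like this:)
# Throttle how fast MM2 may consume from the source cluster (bytes/sec).
./bin/kafka-configs.sh --bootstrap-server source-kafka:9092 --alter \
  --entity-type clients --entity-name mm2-source-client \
  --add-config 'consumer_byte_rate=10485760'
# A matching producer_byte_rate quota on the target cluster throttles the write side.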
Hello Ryanne,
Is there a way to prevent that from happening? We have two separate
clusters with some topics being replicated to the second one for
reporting. If we replicate everything again that reporting would
probably have some problems.
Yes, I wondered when the Networking-guys would come
Glad to hear you are replicating now :)
> it probably started mirroring the last seven days as there was no offset
for the new consumer-group.
That's correct -- MM2 will replicate the entire topic, as far back as the
retention period. However, technically there are no consumer groups in MM2!
Sebastian, there are multiple ways to run MM2. One way is to start the
individual Connectors (MirrorSourceConnector, MirrorCheckpointConnector,
and MirrorHeartbeatConnector) on an existing Connect cluster, if you have
one. Some of the configuration properties you've listed, e.g. "name" and
"connect
Hello again!
Some probably important configs I found out:
We need this to enable mirroring, as it seems to be disabled by default?
source->target.enabled = true
target->source.enabled = true
Also, the Client-IDs can be configured using:
source.client.id = my_cool_id
target.client.id = my_cooler_i
Hello,
I tried running this connect-mirror-config:
name = $MIRROR_NAME
clusters = source, target
source.bootstrap.servers = $SOURCE_SERVERS
target.bootstrap.servers = $TARGET_SERVERS
source->target.topics = $SOURCE_TARGET_TOPICS
target->source.topics = $TARGET_SOURCE_TOPICS
source->target.emit.
Hello Sebastian, please let us know what issues you are facing and we can
probably help. Which config from the KIP are you referencing? Also check
out the readme under ./connect/mirror for more examples.
Ryanne
On Mon, Dec 23, 2019, 12:58 PM Sebastian Schmitz <
sebastian.schm...@propellerhead.co.
I find the best is the README in the source. Look under the connect/mirror
directory, I believe.
Carl
On Mon, Dec 23, 2019, 13:57 Sebastian Schmitz <
sebastian.schm...@propellerhead.co.nz> wrote:
> Hello,
>
> I'm currently trying to implement the new Kafka 2.4.0 and the new MM2.
>
> However, it
I can verify that the above did take (kicking myself). It should be the
same for these too?
b.producer.batch.size = 1048576
b.producer.linger.ms = 30
b.producer.acks = 1
etc etc...
I also see that the properties can be overridden, so this routine
* kill 1 MM2
* change the mm2.prope
> BTW any ideas when 2.4 is being released
Looks like there are a few blockers still.
On Mon, Nov 4, 2019 at 2:06 PM Vishal Santoshi
wrote:
> I bet I have tested the "b.producer.acks' route. I will test again and let
> you know. Note that I resorted to hardcoding that value in the Sender and
>
I bet I have tested the "b.producer.acks" route. I will test again and let
you know. Note that I resorted to hardcoding that value in the Sender and
that alleviated the throttle I was seeing on consumption. BTW any ideas
when 2.4 is being released ( I thought it was Oct 30th 2019 )...
On Mon, Nov
Vishal, b.producer.acks should work, as can be seen in the following unit
test with similar producer property "client.id":
https://github.com/apache/kafka/blob/6b905ade0cdc7a5f6f746727ecfe4e7a7463a200/connect/mirror/src/test/java/org/apache/kafka/connect/mirror/MirrorMakerConfigTest.java#L182
Kee
Jeremy, please see relevant changes documented here:
https://github.com/apache/kafka/blob/cae2a5e1f0779a0889f6cb43b523ebc8a812f4c2/connect/mirror/README.md#multicluster-environments
I've added a --clusters argument which makes XDCR a lot easier to manage,
obviating the configuration race issue.
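(For example, with the standard driver script, pinning a node to only run flows targeting DC2 might look like this; the properties file name is a placeholder:)
./bin/connect-mirror-maker.sh mm2.properties --clusters DC2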
Jeremy, thanks for double checking. I think you are right -- this is a
regression introduced here [1]. For context, we noticed that heartbeats
were only being sent to target clusters, whereas they should be sent to
every cluster regardless of replication topology. To get heartbeats running
everywhe
Apologies, copy/paste issue. Config should look like:
In DC1:
DC1->DC2.enabled = true
DC2->DC1.enabled = false
In DC2:
DC1->DC2.enabled = false
DC2->DC1.enabled = true
Running 1 mm2 node in DC1 / DC2 each. If I start up the DC1 node first,
then DC1 data is replicated to DC2. DC2 data does n
Hey Jeremy, it looks like you've got a typo or copy-paste artifact in the
configuration there -- you've got DC1->DC2 listed twice, but not the
reverse. That would result in the behavior you are seeing, as DC1 actually
has nothing enabled. Assuming this is just a mistake in the email, your
approach