Anup,
I realized I forgot to mention this in the previous message, sorry about
that:
The additional workarounds to restart the connectors or dynamically
reconfigure the log level will only work for MirrorMaker 2.0 running on a
regular Connect cluster which has the REST API enabled.
The MirrorMake
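For reference, a minimal sketch of those two REST calls, assuming a Connect
worker listening on localhost:8083 and a connector named mm2-source (both the
host and the connector name are placeholders):

    # Restart a connector by name via the Connect REST API
    curl -X POST http://localhost:8083/connectors/mm2-source/restart

    # Dynamically lower the level of a noisy logger (Connect admin endpoint, KIP-495)
    curl -X PUT -H 'Content-Type: application/json' \
      -d '{"level": "WARN"}' \
      http://localhost:8083/admin/loggers/org.apache.kafka.connect.mirror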
Arpit,
I am not very familiar with MirrorMaker unfortunately so I won't be able to
give you any specific advice.
I also don't see any MirrorMaker-specific changes that would be relevant,
except for some minor argument changes and the deprecation landing in 3.0.
> It's very random. It replicates f
Hi Greg,
Thanks for getting back to me. Please find more details below
1. Are you using MirrorMaker, or MirrorMaker 2.0?
Mirror maker
2. What version of MM or MM2 are you using, and with what Kafka broker
version?
3.2.3
3. How is your replication flow configured?
We have upstream brokers (3 node
Arpit,
Unfortunately from that description nothing specific is coming to mind.
The max.poll.interval.ms timeout indicates that the consumer is losing
contact with the Kafka cluster, but that may be caused by the replication
application hanging somewhere else.
Some clarifying questions, and things you can lo
Anup,
Here's the best workaround I can think of:
I think you can reconfigure the mechanisms which trigger task
reconfiguration with:
* `refresh.topics.enabled`
* `refresh.topics.interval.seconds`
* `refresh.groups.enabled`
* `refresh.groups.interval.seconds`
Disabling these mechanisms will preve
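For illustration, a sketch of how these could be set in an mm2.properties
file for a dedicated MirrorMaker 2.0 deployment; the A->B flow prefix and the
interval values below are placeholders, not recommendations:

    # Disable the periodic refreshes that trigger task reconfiguration
    A->B.refresh.topics.enabled = false
    A->B.refresh.groups.enabled = false

    # ...or keep them enabled but refresh less often (values in seconds)
    A->B.refresh.topics.interval.seconds = 600
    A->B.refresh.groups.interval.seconds = 600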
Frank,
> I'm operating on the assumption that the connectors in question get stuck
in an inconsistent state
> Another thought... if an API exists to list all connectors in such a
state, then at least some monitoring/alerting could be put in place, right?
There are two different inconsistencies rel
Another thought... if an API exists to list all connectors in such a state,
then at least some monitoring/alerting could be put in place, right?
So I've been looking into the codebase to familiarize myself with it. I'm
operating on the assumption that the connectors in question get stuck in an
inconsistent state which causes them to prune the new task configs from those
which are "broadcast" to the workers. I see on
KafkaConfigBackingSto
Yes, that makes sense thanks.
But the side effect of this is that an enormous amount of log output is
generated.
Is there a quick solution to slow down the logs?
Cheers.
From: Greg Harris
Date: Wednesday, 8 February 2023 at 1:08 pm
To: users@kafka.apache.org
Subject: Re: Mirror maker worker can
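As a hedged illustration of the static alternative to the REST-based log
level change: a noisy logger can also be quieted in the log4j properties file
that the MirrorMaker start script points at; the exact file and the logger
names below are assumptions that depend on the deployment:

    # Raise the threshold for packages assumed to produce the bulk of the logs
    log4j.logger.org.apache.kafka.connect.mirror=WARN
    log4j.logger.org.apache.kafka.clients=WARN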
Hi Gonzalo,
For the produce request record version, you should refer to this file:
https://github.com/apache/kafka/blob/trunk/clients/src/main/resources/common/message/ProduceRequest.json#L35
But you're right, basically the message conversion happened in a very old
produce request version (ex: ve