Hey,

yes, at first I specified client.id in the config (I was not sure which
one is needed):

source->destination.producer.client.id = "mm2"
source->destination.consumer.client.id = "mm2"
source.producer.client.id = "mm2"
source.consumer.client.id = "mm2"
destination.producer.client.id = "mm2"
destination.consumer.client.id = "mm2"

and set throttling this way:

/opt/app/kafka/kafka/bin/kafka-configs.sh --zookeeper test-zk:2181 --alter
--add-config consumer_byte_rate=200000 --entity-type clients --entity-name
mm2
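
For reference, the quota can be checked with --describe on the same entity
(a sketch, assuming the same ZooKeeper host and entity name as above):

/opt/app/kafka/kafka/bin/kafka-configs.sh --zookeeper test-zk:2181
--describe --entity-type clients --entity-name mm2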

but it did not seem to work. I saw in the logs that the client id is
generated based on the above config and actually takes the form
consumer-source-mm2-1, consumer-source-mm2-2, ... so I tried those as
well, but without luck; maybe I just did not set the throttle low enough?
So I tried setting the throttle for all clients using the default setting
on the destination cluster:

/opt/app/kafka/kafka/bin/kafka-configs.sh --zookeeper test-zk-backup:2181
--alter --add-config producer_byte_rate=200000 --entity-type clients
--entity-default
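
Once the initial copy catches up, the default throttle should be removable
the same way with --delete-config (again a sketch, same hosts assumed):

/opt/app/kafka/kafka/bin/kafka-configs.sh --zookeeper test-zk-backup:2181
--alter --delete-config producer_byte_rate --entity-type clients
--entity-default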

That's when I observed that if I set the throttle to a value low enough to
really slow down the mirroring traffic (I checked network utilization), I
started to see "Failed to flush" error messages.
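
If the flush timeout here is the Connect worker's offset.flush.timeout.ms
(default 5000 ms; I have not verified that this is the one involved), then
raising it in the MM2 worker config might give a throttled producer more
time to drain its buffer, e.g.:

offset.flush.timeout.ms = 60000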

So if anyone has a good practice for how this is done, please share.

Peter



On Wed, 8 Jan 2020 at 17:34, Ryanne Dolan <ryannedo...@gmail.com> wrote:

> Peter, have you tried overriding the client ID used by MM2's consumers?
> Otherwise, the client IDs are dynamic, which would make it difficult to
> throttle using quotas.
>
> Ryanne
>
> On Wed, Jan 8, 2020, 10:12 AM Péter Sinóros-Szabó
> <peter.sinoros-sz...@transferwise.com.invalid> wrote:
>
> > Hi,
> >
> > I'd like to throttle the mirroring process when I start Mirror Maker 2
> > for the first time, when it starts to pull all the messages that exist
> > on the source cluster. I'd like to do this only to avoid putting too
> > much traffic on the source cluster, which may slow down existing
> > production clients on it.
> >
> > I tried several quota setups on both the source and destination clusters,
> > but none of them worked:
> > - it either did not have any effect
> > - or it slowed down the mirroring but also caused issues like
> > ERROR WorkerSourceTask{id=MirrorHeartbeatConnector-0} Failed to flush,
> > timed out while waiting for producer to flush outstanding 115 messages
> >
> > Is there a good practice on how to initialize/bootstrap a MirrorMaker
> > cluster on an existing Kafka cluster?
> >
> > Cheers,
> > Peter
> >
>


-- 
 - Sini
