> Does the producer have a (limit or time value) for when to drop messages

IIUC: https://kafka.apache.org/documentation/#producerconfigs
Mostly buffer.memory and request.timeout.ms.
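As a rough Java sketch of where those knobs live (the class name, broker address, and values below are only illustrative, not recommendations; delivery.timeout.ms is the related setting that caps the total time a record may spend being retried):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerTimeoutsExample {                // hypothetical class name
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "dest-kafka:9092"); // hypothetical address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // Memory available for buffering records that have not been sent yet.
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432L);    // 32 MB, the default

            // How long to wait for the broker to answer a single produce request.
            props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);   // 30 s, the default

            // Total time allowed to deliver a record (including retries) before
            // its send() is failed.
            props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120000); // 2 min, the default

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.close();
        }
    }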
> Also, can the producer indicate to its source that this event is happening?

Someone more familiar with MM2 would have to answer that. :) (There is a
sketch at the bottom of this thread of how the plain producer API surfaces
the failure to its caller.)

On Thu, Jan 9, 2020 at 11:54 AM Modster, Anthony <anthony.mods...@teledyne.com> wrote:

> Hello
>
> Does the producer have a (limit or time value) for when to drop messages,
> when the QoS is low?
>
> Also, can the producer indicate to its source that this event is happening?
>
> -----Original Message-----
> From: Andrew Otto <o...@wikimedia.org>
> Sent: Thursday, January 9, 2020 8:32 AM
> To: users@kafka.apache.org
> Subject: Re: Where to run MM2? Source or destination DC/region?
>
> ---External Email---
>
> Hi Peter,
>
> My understanding here comes from MirrorMaker 1, but I believe it holds for
> MM2 (someone correct me if I am wrong!).
>
> For the most part, if you have no latency or connectivity issues, running
> MM at the source will be fine. However, the failure scenario is different
> if something goes wrong.
>
> When running at the destination, it is the Kafka consumer that has to
> cross the network boundary. If the consumer can't consume, it can always
> pick up from where it left off later.
>
> When running at the source, it is the Kafka producer that has to cross
> the network boundary. If the producer can't produce, it will eventually
> drop messages.
>
> On Thu, Jan 9, 2020 at 11:28 AM Péter Sinóros-Szabó
> <peter.sinoros-sz...@transferwise.com.invalid> wrote:
>
> > Hey,
> >
> > I am thinking about where (well, in which AWS region) I should run MM2.
> > I might be wrong, but as far as I know it is better to run it close to
> > the destination cluster. But for other reasons, it would be much easier
> > for me to run it at the source.
> >
> > So is it still advised to run MM2 at the destination? Latency between
> > source and destination is about 32 ms. What are the downsides if I run
> > it at the source?
> >
> > Thanks,
> > Peter
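On the second question in the thread above (whether the producer can signal
the drop back to its source): at the plain producer API level the failure is
surfaced to the caller through the Future returned by send() or through its
Callback; what MM2 itself does with that signal is the part someone closer to
MM2 would have to confirm. A minimal sketch, with the class and method names
made up for illustration:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.TimeoutException;

    public class SendWithErrorSignal {                    // hypothetical class name
        static void mirrorOne(KafkaProducer<String, String> producer,
                              String topic, String key, String value) {
            try {
                producer.send(new ProducerRecord<>(topic, key, value), (metadata, exception) -> {
                    if (exception != null) {
                        // Fired when the record could not be delivered within
                        // delivery.timeout.ms (typically a TimeoutException): this is
                        // where the caller learns the message was dropped.
                        System.err.println("Record not delivered: " + exception);
                    }
                });
            } catch (TimeoutException e) {
                // Thrown by send() itself if buffer.memory is exhausted and it
                // blocked longer than max.block.ms waiting for space.
                System.err.println("Producer buffer full, send() gave up: " + e);
            }
        }
    }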