Thanks Steve for your tips.
Yes, we found many SACKs in the packet sequences of the problematic connections
and observed intermittent network jitter in between. That explains the
behavior seen in our setup.
Regards,
Jeff
On 12/7/17, 7:45 AM, "Steve Mi
This kind of sounds to me like there’s packet loss somewhere and TCP is closing
the window to try to limit congestion. But from the snippets you posted, I
didn’t see any SACKs in the tcpdump output. If there *are* SACKs, that’d be a
strong indicator of loss somewhere, whether it’s in the network or elsewhere.
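One quick way to check for the SACKs Steve mentions is to scan the tcpdump text output for SACK options, which tcpdump prints as `sack N {left:right}`. A minimal sketch (the capture lines below are made-up examples for illustration; real lines would be read from a capture file):

```python
import re

# Hypothetical tcpdump -nn output lines (invented addresses and sequence
# numbers); in practice these would come from the problematic connections.
lines = [
    "IP 10.0.0.2.9092 > 10.0.0.1.45678: Flags [.], ack 1448, win 501, "
    "options [nop,nop,sack 1 {4344:5792}], length 0",
    "IP 10.0.0.2.9092 > 10.0.0.1.45678: Flags [.], ack 5792, win 501, "
    "options [nop,nop,TS val 100 ecr 99], length 0",
]

# A SACK option is printed by tcpdump as "sack N {left:right}".
sack_pat = re.compile(r"sack \d+ \{")
sacks = [line for line in lines if sack_pat.search(line)]
print(len(sacks))  # → 1
```

A burst of such lines on a connection is the loss signature Steve describes.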
MirrorMaker is placed close to the target, and the send/receive buffer sizes
are set to 10MB, which is the result of the bandwidth-delay product. The
OS-level TCP buffer has also been increased to a 16MB max.
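For reference, the bandwidth-delay product is just link rate times round-trip time. A quick sketch of the arithmetic, assuming (for illustration) a 1 Gbit/s WAN path with an 80 ms RTT, which works out to the 10MB figure above:

```python
# Bandwidth-delay product: buffer >= link_rate_bytes_per_sec * RTT.
# Assumed figures for illustration: 1 Gbit/s path, 80 ms round trip.
link_bps = 1_000_000_000   # 1 Gbit/s
rtt_s = 0.080              # 80 ms RTT

bdp_bytes = int(link_bps / 8 * rtt_s)
print(bdp_bytes)  # → 10000000 (about 10 MB)
```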
On Wed, 6 Dec 2017 at 15:19 Jan Filipiak wrote:
Hi,
Two questions: is your MirrorMaker collocated with the source or the target?
What are the send and receive buffer sizes on the connections that span
across the WAN?
Hope we can get you some help.
Best jan
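Jan's question about the effective buffer sizes can be answered with a quick socket-level probe. A sketch (note that on Linux the kernel doubles the requested value for bookkeeping and caps it by `net.core.wmem_max` / `net.core.rmem_max`, so what is granted may differ from what is asked for):

```python
import socket

# Request 10 MB buffers and read back what the kernel actually granted.
REQUESTED = 10 * 1024 * 1024

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, REQUESTED)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED)

# On Linux the values returned here are doubled and subject to the
# net.core.wmem_max / net.core.rmem_max system limits.
granted_snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
granted_rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted_snd, granted_rcv)
s.close()
```

If the granted values come back far below the bandwidth-delay product, the OS limits are clamping the window and throughput across the WAN will suffer.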
On 06.12.2017 14:36, Xu, Zhaohui wrote:
Any update on this issue?
We also ran into a similar situation recently. MirrorMaker is leveraged to
replicate messages between clusters in different datacenters, but sometimes a
portion of the partitions show high consumer lag, and tcpdump also shows a
similar packet delivery pattern. The behavior is s
Hi,
Any pointers will be highly appreciated.
On Thu, 30 Nov 2017 at 14:56 tao xiao wrote:
Hi There,
We are running into a weird situation when using MirrorMaker to replicate
messages between Kafka clusters across datacenters, and we are reaching out
in case you have encountered this kind of problem before or have some
insights into this kind of issue.
Here is the scenario. We have setu