Hello,
We have cross-data-center replication. Using Kafka MirrorMaker we are
replicating data from our primary cluster to a backup cluster. The problem
arises when we start operating from the backup cluster, in case of a drill
or an actual outage. Data gathered at the backup cluster needs to be reverse-replicated t
Your understanding is correct.
Unfortunately, a regression slipped into the 1.0 release such that the
described optimization is not done... It's fixed in the upcoming 2.0 release.
-Matthias
On 5/24/18 4:52 PM, Todd Hughes wrote:
> From what I've read, a Ktable directly sourced from a compacted topic is
Hi Shantanu,
If you are using Kafka Streams, upgrade to the latest jar. There are a bunch
of fixes in the way it uses Kafka consumers.
Apart from this, try these settings:
1. Set the session.timeout.ms value higher, to something like 30
2. Set the heartbeat.interval.ms to a lower value, something
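As a concrete illustration of the advice above (the exact values here are assumptions for the sketch, not recommendations; tune them for your workload), these are plain consumer properties:

```java
import java.util.Properties;

public class ConsumerTuning {
    public static Properties tunedProps() {
        Properties props = new Properties();
        // Longer session timeout: the group coordinator waits longer
        // before declaring this consumer dead (value is illustrative).
        props.put("session.timeout.ms", "30000");
        // Heartbeats sent well inside the session timeout
        // (a common guideline is roughly one third of it).
        props.put("heartbeat.interval.ms", "10000");
        // Fewer records per poll() keeps the time between polls short.
        props.put("max.poll.records", "50");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(tunedProps());
    }
}
```

The same Properties object would then be passed to the KafkaConsumer constructor alongside the usual bootstrap and deserializer settings.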
From what I've read, a KTable directly sourced from a compacted topic is smart
enough not to use a changelog in the background. I must be doing something
wrong, though, as I have a setup similar to the one below, and I can see on the
broker a topic named something like myappid-myStore-changelog is actual
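For reference, the setup being described looks roughly like this (a pseudocode sketch against the Kafka Streams DSL; the topic name and "myStore" are placeholders taken from the message above):

```
StreamsBuilder builder = new StreamsBuilder();
// KTable sourced directly from a compacted topic. The optimization
// discussed in this thread would reuse the source topic for restoration
// instead of creating a myappid-myStore-changelog topic.
KTable<String, String> table = builder.table(
    "my-compacted-topic",
    Materialized.as("myStore"));
```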
Hey Vincent,
That's exactly how my code is. I am doing the processing within that for loop.
In KIP-62 I read that the heartbeat happens via a separate thread
(https://github.com/dpkp/kafka-python/issues/948). But you are saying it
happens through polling. Which is true? I have set
session.tim
Shantanu, I was referring more to your application code.
You should have something similar to:
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        // Your logic
    }
}
You should make sure that the code within the loop doesn't t
Another observation: when I restart my application, consumption doesn't start
for 5-6 minutes. In the Kafka consumer logs I see
ConsumerCoordinator.333 - Revoking previously assigned partitions [] for
group notifications-consumer
AbstractCoordinator:381 - (Re-)joining group notifications-consu
So here, when I receive a message I run some business logic on it and try
to send an email. Now, sometimes we have a promotional campaign running and
millions of emails need to be delivered. For such numerous events, is manual
commit good? Will it generate too much network activity if I commit a
singl
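One common middle ground for this concern (a sketch of the general pattern, not necessarily the thread's conclusion) is to disable auto-commit and commit once per polled batch rather than once per record, so commit traffic scales with the number of batches, not the number of messages:

```java
import java.util.Properties;

public class CommitBatching {
    public static Properties manualCommitProps() {
        Properties props = new Properties();
        // Take over commit responsibility from the consumer.
        props.put("enable.auto.commit", "false");
        return props;
    }

    // Rough cost model: committing once per batch instead of once per
    // record divides the number of commit requests by the batch size
    // (illustrative arithmetic only).
    public static long commitsFor(long messages, long batchSize) {
        return (messages + batchSize - 1) / batchSize; // ceiling division
    }

    public static void main(String[] args) {
        // 1,000,000 messages in batches of 500 -> 2,000 commit requests
        System.out.println(commitsFor(1_000_000, 500));
    }
}
```

With this pattern, the poll loop processes the whole batch returned by poll() and then calls consumer.commitSync() once at the end of the batch.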
Hi Vincent,
Yes, I reduced max.poll.records to get that same effect. I reduced it all
the way down to 5 records, but I am still seeing the same error. What else can
be done? For one topic I can see that a single message takes about 20 seconds
to process, so 5 of them will take about 100 seconds. So I set
session
Hi M. Manna,
Thanks I will try these settings.
On Thu, May 24, 2018 at 5:15 PM M. Manna wrote:
> Set your rebalance.backoff.ms=1 and zookeeper.session.timeout.ms=3
> in addition to what Manikumar said.
>
>
>
> On 24 May 2018 at 12:41, Shantanu Deshmukh wrote:
>
> > Hello,
> >
> > There
Manual commit is important where event consumption eventually leads to some
post-processing/database update/state change for your application. Without
doing all those, you cannot truly say that you have "Received" the message.
"Receiving" is interpreted differently and it's up to your target
applic
Hello Shantanu,
It is also important to consider your consumer code. You should not spend
too much time between two calls to the "poll" method. Otherwise, a consumer
that is not calling poll will be considered dead by the group, triggering a
rebalance.
Best
On Thu, May 24, 2018 at 1:45 PM M. Manna wr
Hello everyone,
We have a 3 broker Kafka 0.10.1.0 cluster in production environment. Lately
we are seeing a lot of "auto commit failed because poll() spend too much
time processing" warning messages. Also, due to such events there is a
constant fear of duplicate messages, and they do happen. To
Set your rebalance.backoff.ms=1 and zookeeper.session.timeout.ms=3
in addition to what Manikumar said.
On 24 May 2018 at 12:41, Shantanu Deshmukh wrote:
> Hello,
>
> There was a typo in my first mail. session.timeout.ms is actually 6
> not
> 6000. So it is less than heartbeat.inter
Hello,
There was a typo in my first mail. session.timeout.ms is actually 6 not
6000. So it is less than heartbeat.interval.ms.
On Thu, May 24, 2018 at 2:46 PM Manikumar wrote:
> heartbeat.interval.ms should be lower than session.timeout.ms.
>
> Check here:
> http://kafka.apache.org/0101/doc
heartbeat.interval.ms should be lower than session.timeout.ms.
Check here:
http://kafka.apache.org/0101/documentation.html#newconsumerconfigs
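The constraint above can be written as a simple invariant check (the one-third rule of thumb encoded here is a common guideline, not a hard requirement of the protocol):

```java
public class HeartbeatCheck {
    // heartbeat.interval.ms must be lower than session.timeout.ms;
    // a common rule of thumb is to keep it no higher than one third
    // of the session timeout.
    public static boolean isValid(int heartbeatMs, int sessionTimeoutMs) {
        return heartbeatMs < sessionTimeoutMs
            && heartbeatMs <= sessionTimeoutMs / 3;
    }

    public static void main(String[] args) {
        // 3000/10000 are the consumer defaults in this Kafka version.
        System.out.println(isValid(3000, 10000));
    }
}
```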
On Thu, May 24, 2018 at 2:39 PM, Shantanu Deshmukh
wrote:
> Someone please help me. I have been suffering from this issue for a long time
> and not finding
Someone please help me. I have been suffering from this issue for a long time
and am not finding any solution.
On Wed, May 23, 2018 at 3:48 PM Shantanu Deshmukh
wrote:
> We have a 3 broker Kafka 0.10.0.1 cluster. There we have 3 topics with 10
> partitions each. We have an application which spawns thr