Hi,
Please note that REPLICATION_FACTOR_CONFIG is already set to three.
What we observe is that no matter what value the producer request timeout is
increased to, for one or two partitions the request still times out after that
interval.
The Streams-side log simply has a message like this:
org.apache.kafka.common.errors
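For context, the two settings in play look like this in our config (a sketch;
the values shown are illustrative, not our exact ones):

props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);        // already three
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 180000);  // raised producer request timeout, in ms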
Thanks Matthias, I added some offset logging in ProcessorStateManager and
RocksDBStore.
The behaviour is just like your explanation.
Here is the log from restarting instance1. The two active tasks really do
replay from position 2 of the checkpoint file:
[06:55,061] createStreamTask 0_0, partitions: [streams-wc-in
Congrats!
~ Joe Stein
On Fri, Jun 9, 2017 at 6:49 PM, Neha Narkhede wrote:
> Well deserved. Congratulations Damian!
>
> On Fri, Jun 9, 2017 at 1:34 PM Guozhang Wang wrote:
>
> > Hello all,
> >
> >
> > The PMC of Apache Kafka is pleased to announce that we have invited
> > Damian Guy as a committer to the project.
Congrats Damian! Thanks for all your contributions.
On Fri, Jun 9, 2017 at 2:52 PM, Martin Gainty wrote:
> congratulations damian!
>
>
> Martin
>
>
>
> From: Gwen Shapira
> Sent: Friday, June 9, 2017 4:55 PM
> To: users@kafka.apache.org
> Cc: d...@kafka.apache.org; priv...@kafka.apache.org
congratulations damian!
Martin
From: Gwen Shapira
Sent: Friday, June 9, 2017 4:55 PM
To: users@kafka.apache.org
Cc: d...@kafka.apache.org; priv...@kafka.apache.org
Subject: Re: [ANNOUNCE] New committer: Damian Guy
Congratulations :)
On Fri, Jun 9, 2017 at 1:49 PM Vahid S Hashemian wrote:
Well deserved. Congratulations Damian!
On Fri, Jun 9, 2017 at 1:34 PM Guozhang Wang wrote:
> Hello all,
>
>
> The PMC of Apache Kafka is pleased to announce that we have invited Damian
> Guy as a committer to the project.
>
> Damian has made tremendous contributions to Kafka. He has not only
> contributed
Congratulations :)
On Fri, Jun 9, 2017 at 1:49 PM Vahid S Hashemian
wrote:
> Great news.
>
> Congrats Damian!
>
> --Vahid
>
>
>
> From: Guozhang Wang
> To: "d...@kafka.apache.org" ,
> "users@kafka.apache.org" ,
> "priv...@kafka.apache.org"
> Date: 06/09/2017 01:34 PM
> Subject: [ANNOUNCE] New committer: Damian Guy
Great news.
Congrats Damian!
--Vahid
From: Guozhang Wang
To: "d...@kafka.apache.org" ,
"users@kafka.apache.org" ,
"priv...@kafka.apache.org"
Date: 06/09/2017 01:34 PM
Subject: [ANNOUNCE] New committer: Damian Guy
Hello all,
The PMC of Apache Kafka is pleased to announce that we have invited Damian
Guy as a committer to the project.
Congrats Damian!
-James
> On Jun 9, 2017, at 1:34 PM, Guozhang Wang wrote:
>
> Hello all,
>
>
> The PMC of Apache Kafka is pleased to announce that we have invited Damian
> Guy as a committer to the project.
>
> Damian has made tremendous contributions to Kafka. He has not only
> contributed
Hello all,
The PMC of Apache Kafka is pleased to announce that we have invited Damian
Guy as a committer to the project.
Damian has made tremendous contributions to Kafka. He has not only
contributed a lot to the Streams API, but has also been involved in many
other areas like the producer an
Hi Sachin,
As Damian mentioned, it'd be useful to see some logs from both the broker and
streams.
One thing that comes to mind is whether your topics are replicated at all. You
could try setting the replication factor of Streams topics (e.g., changelogs
and repartition topics) to 2 or 3 using StreamsConfig's replication.factor
setting.
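Something along these lines (a sketch; pick the factor that matches your
broker count, and the app id here is just a placeholder):

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");             // illustrative
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);                // changelogs + repartition topics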
Hi All,
We still intermittently get this error.
We had added the config
props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
and the timeout mentioned above is set as:
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 180000); // 3 minutes, in ms
So we increased it from the default of 30 seconds to 3 minutes, and then to 30
minutes.
In the course of running our streaming application we discovered that it went
into a hung state, and upon further inspection we found that some of the
partitions had no leader assigned.
Here is the description of the topic:
# bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic
new-part-advice-key-tabl
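As an aside, kafka-topics.sh also accepts --unavailable-partitions when
describing, and the same check can be done programmatically. A sketch using
the new 0.11 AdminClient (the topic name is a placeholder, not our real one):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

import java.util.Collections;
import java.util.Properties;

public class LeaderCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singleton("my-topic"))
                    .all().get().get("my-topic");
            for (TopicPartitionInfo p : desc.partitions()) {
                // a partition with no assigned leader shows up as a null/empty node
                if (p.leader() == null || p.leader().isEmpty()) {
                    System.out.println("partition " + p.partition() + " has no leader");
                }
            }
        }
    }
}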
Hi,
A `CommitFailedException` can still occur if an instance misses a
rebalance. I think these are two different problems.
Having said this, Streams should recover from `CommitFailedException`
automatically by triggering another rebalance afterwards.
Nevertheless, we know that there is an issue
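For background, this is the bare-consumer shape of the problem (a sketch, not
actual Streams code; the exception is org.apache.kafka.clients.consumer.CommitFailedException):

try {
    consumer.commitSync();
} catch (CommitFailedException e) {
    // the group rebalanced while this instance was still processing, so the
    // commit is rejected; the member must rejoin the group -- Streams issues
    // that rejoin automatically
}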
I see your point Eno, but the truth is, in my real app I am getting
"CommitFailedException" even though I did not change "max.poll.interval.ms"
and it remains at Integer.MAX_VALUE.
I'm further investigating the origin of that exception. My current working
theory is that if a custom processor throws
Your observation is completely correct and this is also correct behavior.
Note that instance1 and instance2 both also have a local RocksDB
instance that holds the state. The checkpoint file basically tells
Streams what prefix of the changelog topic is already in RocksDB.
As Streams loads (no
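Conceptually, the restore step does something like this (a simplified sketch,
not the actual Streams internals; the changelog name is illustrative, and
restoreConsumer is a KafkaConsumer<byte[], byte[]> with the checkpointed
offset read from the .checkpoint file):

TopicPartition changelog = new TopicPartition("streams-wc-counts-changelog", 0);
restoreConsumer.assign(Collections.singletonList(changelog));
restoreConsumer.seek(changelog, checkpointedOffset);   // e.g. 2, as in your log
long end = restoreConsumer.endOffsets(Collections.singletonList(changelog)).get(changelog);
while (restoreConsumer.position(changelog) < end) {
    for (ConsumerRecord<byte[], byte[]> record : restoreConsumer.poll(100)) {
        // re-apply record.key() / record.value() to the local RocksDB store
    }
}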
Even without a state store the tasks themselves will get rebalanced.
So you'll definitely trigger the problem with steps 1-3 as you describe, and
that is confirmed. The reason we increased "max.poll.interval.ms" to basically
infinite is precisely to avoid this problem.
Eno
> On 9 Jun 2017, at 07:
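For reference, that default is equivalent to setting this in user code (a
sketch):

props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG),
          Integer.MAX_VALUE);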
Hi,
Did you try a Logstash appender?
Then you can configure the layout however you are comfortable.
Regards
--Fiilippo
On 08 Jun 2017 10:30 PM, "IT Consultant" <0binarybudd...@gmail.com> wrote:
> Hi All,
>
> Has anybody tried to parse Kafka logs using Logstash?
>
> If yes, can you please share pa
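Something like this in the broker's log4j.properties, with Logstash's log4j
(or tcp) input listening on the other side (host and port here are
placeholders):

log4j.rootLogger=INFO, stdout, LOGSTASH
log4j.appender.LOGSTASH=org.apache.log4j.net.SocketAppender
log4j.appender.LOGSTASH.RemoteHost=logstash.example.com
log4j.appender.LOGSTASH.Port=4560
log4j.appender.LOGSTASH.ReconnectionDelay=10000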
To help out, I made the project that reproduces this issue publicly
available at https://github.com/Hartimer/kafka-stream-issue
On Thu, Jun 8, 2017 at 11:40 PM João Peixoto
wrote:
> I am now able to consistently reproduce this issue with a dummy project.
>
> 1. Set "max.poll.interval.ms" to a low
CC-ed the users mailing list, as I think it's more appropriate for this thread.
Sanjay, if what you're after is the following pattern:
http://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReplyJmsExample.html
then yes, you can do this in Kafka. The outline would be similar to
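The outline is cut off above, but a minimal sketch of one common way to wire
it up looks like this (topic names, serialization, and the correlation scheme
are all illustrative, not a fixed API):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Collections;
import java.util.Properties;
import java.util.UUID;

public class RequestReplySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "request-reply-demo");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Requester: key every request with a fresh correlation id.
        String correlationId = UUID.randomUUID().toString();
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("requests", correlationId, "do-something"));
        }

        // Wait on the reply topic until a record with our correlation id shows up.
        // (The responder, not shown, consumes "requests", does the work, and
        // produces the result to "replies" with the request key unchanged --
        // that unchanged key is what lets the requester correlate.)
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("replies"));
            boolean done = false;
            while (!done) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    if (correlationId.equals(record.key())) {
                        System.out.println("reply: " + record.value());
                        done = true;
                    }
                }
            }
        }
    }
}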