Re: Producer Timeout issue in kafka streams task

2021-11-01 Thread Luke Chen
Hi Pushkar, In addition to Matthias's and Guozhang's answers and clear explanations, I think there's still one thing you should focus on: > I could see that 2 of the 3 brokers restarted at the same time. It's a 3-broker cluster in total, and suddenly 2 of them are broken. You should try to find out …

Re: Producer Timeout issue in kafka streams task

2021-11-01 Thread Matthias J. Sax
The `Producer#send()` call is actually not covered by the KIP, because it may result in data loss if we try to handle the timeout directly. -- Kafka Streams does not have a copy of the data in the producer's send buffer and thus we cannot retry the `send()`. -- Instead, it's necessary to re-process …
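
A minimal sketch of the constraint Matthias describes, using a plain producer with hypothetical broker and topic names: once `send()` fails with a timeout, the client's buffered copy of the record is gone, so recovery means rebuilding the record from the original input, which is what Streams does by re-processing from the source topic.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{Callback, KafkaProducer, ProducerRecord, RecordMetadata}
    import org.apache.kafka.common.errors.TimeoutException

    val props = new Properties()
    props.put("bootstrap.servers", "broker1:9092") // hypothetical address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    val record = new ProducerRecord("sink-topic", "key", "value") // hypothetical topic

    producer.send(record, new Callback {
      override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit =
        exception match {
          case _: TimeoutException =>
            // The producer has already dropped its buffered copy; a safe
            // retry must re-create the record from the original input,
            // which is why Streams rewinds to the source topic instead of
            // retrying send() in place.
          case null  => () // acknowledged by the broker
          case other => other.printStackTrace() // simplistic handling for the sketch
        }
    })
    producer.close()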

Re: Producer Timeout issue in kafka streams task

2021-11-01 Thread Matthias J. Sax
As the error message suggests, you can increase `max.block.ms` for this case: if a broker is down, it may take some time for the producer to fail over to a different broker (before the producer can fail over, the broker must elect a new partition leader, and only afterward can it inform the producer …
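
A sketch of that knob in a Streams configuration, with illustrative application id, address, and timeout value; `producerPrefix` routes the setting to the embedded producer (the default `max.block.ms` is 60000 ms in the 2.x clients):

    import java.util.Properties
    import org.apache.kafka.clients.producer.ProducerConfig
    import org.apache.kafka.streams.StreamsConfig

    val props = new Properties()
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app")  // hypothetical
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092") // hypothetical
    // Let the embedded producer block longer while a new partition leader
    // is elected after a broker goes down.
    props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_BLOCK_MS_CONFIG), "180000")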

Re: Producer Timeout issue in kafka streams task

2021-11-01 Thread Guozhang Wang
Hello Pushkar, I'm assuming you have the same Kafka version (2.5.1) on the Streams client side here: in those old versions, Kafka Streams relies on the embedded producer clients to handle timeouts, which requires users to configure such values correctly. In newer versions (2.8+) we have made Kafka …
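
Guozhang is presumably referring to KIP-572 here; a sketch of the corresponding 2.8+ setting, assuming the `StreamsConfig` constant from those versions (the illustrative value matches the documented 300000 ms default):

    import java.util.Properties
    import org.apache.kafka.streams.StreamsConfig

    val props = new Properties()
    // On 2.8+, Streams retries timed-out operations internally and only
    // fails the task once task.timeout.ms has elapsed.
    props.put(StreamsConfig.TASK_TIMEOUT_MS_CONFIG, "300000")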

Producer Timeout issue in kafka streams task

2021-10-31 Thread Pushkar Deole
Hi All, I am getting the below issue in a streams application. The Kafka cluster is a 3-broker cluster (v 2.5.1), and I could see that 2 of the 3 brokers restarted at the same time the below exception occurred in the streams application, so I can relate the exception to those broker restarts. However, what is …

Re: Avro DeSerializeation Issue in Kafka Streams

2020-05-05 Thread Nagendra Korrapati
When specific.avro.reader is set to true, the deserializer tries to create an instance of the class. The class name is formed by reading the (writer) schema from the schema registry and concatenating the namespace and record name. It is trying to create that instance, and it is not found on the classpath …
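
A sketch of that configuration using Confluent's SpecificAvroSerde; the registry URL is a placeholder, and the type parameter is shown as the SpecificRecord interface for brevity where real code would use the generated class (whose namespace and name must match the writer schema, and which must be on the classpath):

    import java.util.{HashMap => JHashMap}
    import org.apache.avro.specific.SpecificRecord
    import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde

    val serdeConfig = new JHashMap[String, String]()
    serdeConfig.put("schema.registry.url", "http://schema-registry:8081") // placeholder URL
    serdeConfig.put("specific.avro.reader", "true") // deserialize into the generated class

    val serde = new SpecificAvroSerde[SpecificRecord]()
    serde.configure(serdeConfig, false) // isKey = false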

Avro DeSerializeation Issue in Kafka Streams

2020-05-05 Thread Suresh Chidambaram
Hi All, Currently I'm working on a use case wherein I have to deserialize an Avro object and convert it to some other format of Avro. Below is the flow: DB -> Source Topic (Avro format) -> Stream Processor -> Target Topic (Avro as nested object). When I deserialize the message from the Source Topic, …
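
For the flow described, a minimal topology sketch: topic names are hypothetical, the GenericAvroSerde is left unconfigured for brevity (it needs the schema registry URL), and toNested is a placeholder for the actual reshaping logic:

    import org.apache.avro.generic.GenericRecord
    import org.apache.kafka.common.serialization.Serdes
    import org.apache.kafka.streams.StreamsBuilder
    import org.apache.kafka.streams.kstream.{Consumed, Produced, ValueMapper}
    import io.confluent.kafka.streams.serdes.avro.GenericAvroSerde

    // Placeholder: reshape the flat source record into the nested target schema.
    def toNested(r: GenericRecord): GenericRecord = r

    val valueSerde = new GenericAvroSerde() // configure() with the registry URL omitted
    val builder = new StreamsBuilder()
    builder
      .stream("source-topic", Consumed.`with`(Serdes.String(), valueSerde))
      .mapValues(new ValueMapper[GenericRecord, GenericRecord] {
        override def apply(v: GenericRecord): GenericRecord = toNested(v)
      })
      .to("target-topic", Produced.`with`(Serdes.String(), valueSerde))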

Re: Issue in Kafka 2.0.0 ?

2018-08-20 Thread Ted Yu
Sent out PR #5543, which fixes the reported bug, with StreamToTableJoinScalaIntegrationTestImplicitSerdes.testShouldCountClicksPerRegion modified to add the filter methods. FYI On Mon, Aug 20, 2018 at 5:26 PM Ted Yu wrote: > Thanks for pointing me to that PR. > > I applied the PR locally but …

Re: Issue in Kafka 2.0.0 ?

2018-08-20 Thread Ted Yu
Hi, I am aware that more than one method in KTable.scala has this issue. Once I find a solution, I will apply the fix to the methods you listed. Cheers On Mon, Aug 20, 2018 at 5:23 PM Druhin Sagar Goel wrote: > Thanks a lot Ted! > > FYI - The issue is not limited to > org.apache.kafka.s…

Re: Issue in Kafka 2.0.0 ?

2018-08-20 Thread Ted Yu
Thanks for pointing me to that PR. I applied the PR locally but still got: org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes > testShouldCountClicksPerRegion FAILED java.lang.StackOverflowError I can go over that PR to see what can be referenced for solving this …

Re: Issue in Kafka 2.0.0 ?

2018-08-20 Thread Druhin Sagar Goel
Thanks a lot Ted! FYI - The issue is not limited to the org.apache.kafka.streams.scala.KTable.filter. It also happens with org.apache.kafka.streams.scala.KTable.filterNot, org.apache.kafka.streams.scala.KStream.foreach and org.apache.kafka.streams.scala.KStream.peek. - Druhin On August 20, …

Re: Issue in Kafka 2.0.0 ?

2018-08-20 Thread Guozhang Wang
Is this related to the fix https://github.com/apache/kafka/pull/5502 that is currently being worked on? Guozhang On Mon, Aug 20, 2018 at 5:19 PM, Matthias J. Sax wrote: > Thanks for reporting and for creating the ticket! > > -Matthias > > On 8/20/18 5:17 PM, Ted Yu wrote: > > I was able to reproduce …

Re: Issue in Kafka 2.0.0 ?

2018-08-20 Thread Matthias J. Sax
Thanks for reporting and for creating the ticket! -Matthias On 8/20/18 5:17 PM, Ted Yu wrote: > I was able to reproduce what you saw with a modification > to StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala > I have logged KAFKA-7316 and am looking for a fix. > > FYI > > On Mon, Aug 20, …

Re: Issue in Kafka 2.0.0 ?

2018-08-20 Thread Ted Yu
I was able to reproduce what you saw with a modification to StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala. I have logged KAFKA-7316 and am looking for a fix. FYI On Mon, Aug 20, 2018 at 1:39 PM Druhin Sagar Goel wrote: > Isn’t that a bug then? Or can I fix my code somehow? …

Re: Issue in Kafka 2.0.0 ?

2018-08-20 Thread Druhin Sagar Goel
Isn’t that a bug then? Or can I fix my code somehow? On August 20, 2018 at 1:30:42 PM, Ted Yu (yuzhih...@gmail.com) wrote: I think what happened in your use case was that the following implicit from ImplicitConversions.scala kept wrapping the resultant KTable from filter() …

Re: Issue in Kafka 2.0.0 ?

2018-08-20 Thread Ted Yu
I think what happened in your use case was that the following implicit from ImplicitConversions.scala kept wrapping the resultant KTable from filter(): implicit def wrapKTable[K, V](inner: KTableJ[K, V]): KTable[K, V] = … leading to stack overflow. Cheers On Mon, Aug 20, 2018 at 12:50 PM Druhin …
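
A self-contained sketch of that failure mode with stand-in classes (not the actual Kafka source): the wrapper's method cannot pass its Scala function to the Java-style inner object, so the compiler applies the implicit view to `inner` and the call dispatches back to the wrapper method itself:

    import scala.language.implicitConversions

    object WrapRecursionSketch {
      class KTableJ { // stand-in for the underlying Java KTable
        def filter(p: java.util.function.Predicate[String]): KTableJ = this
      }

      implicit def wrapKTable(inner: KTableJ): KTable = new KTable(inner)

      class KTable(val inner: KTableJ) {
        def filter(predicate: String => Boolean): KTable =
          // `predicate` is a Scala Function1, not a java.util.function.Predicate,
          // so this call cannot reach KTableJ.filter; scalac rewrites it to
          // wrapKTable(inner).filter(predicate), i.e. this very method --
          // unbounded recursion and eventually StackOverflowError.
          inner.filter(predicate)
      }

      def main(args: Array[String]): Unit =
        new KTable(new KTableJ).filter(_.nonEmpty) // throws java.lang.StackOverflowError
    }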

Issue in Kafka 2.0.0 ?

2018-08-20 Thread Druhin Sagar Goel
Hi, I’m using the org.apache.kafka.streams.scala package that was released with version 2.0.0. I’m getting a StackOverflowError as follows: java.lang.StackOverflowError at org.apache.kafka.streams.scala.kstream.KTable.filter(KTable.scala:49) at org.apache.kafka.streams.scala.kstream.KTable.filter(KTable.scala:…

Re: I have issue in Kafka 2.0

2018-08-13 Thread Steve Tian
Have you checked the javadoc of KafkaConsumer? On Mon, Aug 13, 2018, 11:10 PM Kailas Biradar wrote: > I keep hitting this ConcurrentModificationException because > KafkaConsumer is not safe for multi-threaded access > > -- > kailas
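
The javadoc pattern Steve is pointing to, sketched with placeholder broker, group, and topic names: one thread owns every consumer call, and wakeup() is the only method another thread may invoke.

    import java.time.Duration
    import java.util.{Collections, Properties}
    import java.util.concurrent.atomic.AtomicBoolean
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.kafka.common.errors.WakeupException

    val props = new Properties()
    props.put("bootstrap.servers", "broker1:9092") // placeholder
    props.put("group.id", "my-group")              // placeholder
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

    val consumer = new KafkaConsumer[String, String](props)
    val closed = new AtomicBoolean(false)

    val pollThread = new Thread(() => {
      try {
        consumer.subscribe(Collections.singletonList("test")) // placeholder topic
        while (!closed.get) {
          // poll/commit/seek all stay on this single thread
          consumer.poll(Duration.ofMillis(500)).forEach(r => println(s"${r.key}: ${r.value}"))
        }
      } catch {
        case _: WakeupException if closed.get => // expected during shutdown
      } finally consumer.close()
    })
    pollThread.start()

    // Shutdown (e.g. from a shutdown hook): the ONLY safe cross-thread call.
    closed.set(true)
    consumer.wakeup()
    pollThread.join()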

I have issue in Kafka 2.0

2018-08-13 Thread Kailas Biradar
I keep hitting this ConcurrentModificationException because KafkaConsumer is not safe for multi-threaded access -- kailas

Re: Issue in kafka

2018-03-07 Thread Fabio Yamada
Hi Subash, First check whether your ZooKeeper has your Kafka broker registered: /opt/zookeeper/bin/zkCli.sh -server localhost:2181 <<< "ls /brokers/ids" The output should list your brokers, in your case [0]. In my dev env I have 3 brokers: [zk: localhost:2181(CONNECTED) 0] ls /brokers/ids [0…

Issue in kafka

2018-03-07 Thread Subash Konar
Hi team, I am new to Kafka and I am trying to learn the basics. I have issued the below command after creating topic test in a single-node cluster: /opt/cloudera/parcels/KAFKA-2.1.1-1.2.1.1.p0.18/lib/kafka/bin/kafka-console-producer.sh --broker-list :9092 --topic test And then I pass the message *This Is D…

Re: Issue in Kafka running for few days

2017-04-30 Thread Michal Borowiecki
Ah, yes, you're right. I misread it. My bad, apologies. Michal On 30/04/17 16:02, Svante Karlsson wrote: @michal My interpretation is that he's running 2 instances of zookeeper - not 6 (1 on the "4 broker machine" and one on the other). I'm not sure where that leaves you in zookeeper land …

Re: Issue in Kafka running for few days

2017-04-30 Thread Svante Karlsson
@michal My interpretation is that he's running 2 instances of zookeeper - not 6 (1 on the "4 broker machine" and one on the other). I'm not sure where that leaves you in zookeeper land - i.e. if you happen to have a timeout between the two zookeepers, will you be out of service or will you have a split …

Re: Issue in Kafka running for few days

2017-04-30 Thread Michal Borowiecki
Hi Jan, Correct. As I said before, it's not common or recommended practice to run an even number, and I wouldn't recommend it myself. I hope it didn't sound as if I did. However, I don't see how this would cause the issue at hand unless at least 3 out of the 6 zookeepers died, but that could …

Re: Issue in Kafka running for few days

2017-04-30 Thread jan
I looked this up yesterday when I read the grandparent, as my old company ran two and I needed to know. Your link is a bit ambiguous, but it has a link to the ZooKeeper Getting Started guide, which says this: "For replicated mode, a minimum of three servers are required, and it is strongly recommended …

Re: Issue in Kafka running for few days

2017-04-30 Thread Michal Borowiecki
Svante, I don't share your opinion. Having an even number of zookeepers is not a problem in itself; it simply means you don't get any better resilience than if you had one fewer instance. Yes, it's not common or recommended practice, but you are allowed to have an even number of zookeepers, and …
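
The arithmetic behind that statement, as a tiny sketch: with a majority quorum of floor(n/2)+1, an even ensemble tolerates exactly as many failures as the next smaller odd one.

    def quorum(n: Int): Int = n / 2 + 1        // servers needed to keep serving
    def tolerated(n: Int): Int = n - quorum(n) // servers that may fail

    (3 to 6).foreach(n => println(s"ensemble=$n quorum=${quorum(n)} tolerates=${tolerated(n)}"))
    // ensemble=3 quorum=2 tolerates=1
    // ensemble=4 quorum=3 tolerates=1
    // ensemble=5 quorum=3 tolerates=2
    // ensemble=6 quorum=4 tolerates=2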

Re: Issue in Kafka running for few days

2017-04-26 Thread Svante Karlsson
You are not supposed to run an even number of zookeepers. Fix that first. On Apr 26, 2017 20:59, "Abhit Kalsotra" wrote: > Any pointers please > > Abhi > > On Wed, Apr 26, 2017 at 11:03 PM, Abhit Kalsotra wrote: > > Hi * > > My kafka setup: OS: Windows; 6 broker nodes, 4 on one machine and 2 on the other; > > ZK instance on the 4-broker machine and another ZK on the 2-broker …

Re: Issue in Kafka running for few days

2017-04-26 Thread Abhit Kalsotra
Any pointers please Abhi On Wed, Apr 26, 2017 at 11:03 PM, Abhit Kalsotra wrote: > Hi * > > My kafka setup: OS: Windows; 6 broker nodes, 4 on one machine and 2 on the other; > ZK instance on the 4-broker machine and another ZK on the 2-broker machine …

Issue in Kafka running for few days

2017-04-26 Thread Abhit Kalsotra
Hi * My kafka setup: OS: Windows; 6 broker nodes, 4 on one machine and 2 on the other; one ZK instance on the 4-broker machine and another ZK on the 2-broker machine; 2 topics with partitions = 50 and replication factor = 3. I am producing on an average of around 500…

Re: Strange issue in Kafka Windows where Log directory getting increased day by day

2016-11-27 Thread Abhit Kalsotra
Any pointers, guys? Abhi On Sun, Nov 27, 2016 at 11:14 AM, Abhit Kalsotra wrote: > Dear * > > I am facing an issue with Kafka on Windows: the log directory size keeps > increasing day by day. > > Currently the traffic on my kafka setup is minimal, but if this log …

Strange issue in Kafka Windows where Log directory getting increased day by day

2016-11-26 Thread Abhit Kalsotra
Dear * I am facing an issue with Kafka on Windows: the log directory size keeps increasing day by day. Currently the traffic on my Kafka setup is minimal, but if this log directory growth does not get fixed then it is going to be a serious issue... My Kafka setup: 2 Windows Server…
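
If the growth is plain retention rather than a cleanup failure, bounding the logs in server.properties is the first thing to check; the values below are illustrative, not recommendations:

    # Illustrative server.properties excerpt: bound log growth by time and size.
    log.retention.hours=48
    # per-partition size cap (1 GiB) and segment roll size (256 MiB)
    log.retention.bytes=1073741824
    log.segment.bytes=268435456
    # how often the retention checker runs
    log.retention.check.interval.ms=300000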

Re: Regarding issue in Kafka-0.8.2.2.3

2016-02-08 Thread allen chan
I export my JMX_PORT setting in the kafka-server-start.sh script and have not run into any issues yet. On Mon, Feb 8, 2016 at 9:01 AM, Manikumar Reddy wrote: > Kafka scripts use the "kafka-run-class.sh" script to set environment variables > and run classes. So if you set any environment variable …

Re: Regarding issue in Kafka-0.8.2.2.3

2016-02-08 Thread Manikumar Reddy
Kafka scripts use the "kafka-run-class.sh" script to set environment variables and run classes, so if you set any environment variable in "kafka-run-class.sh", it will apply to all the scripts. So try setting a different JMX_PORT in kafka-topics.sh. On Mon, Feb 8, 2016 at 9:24 PM, Shi…
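
For example (port and topic name illustrative), overriding the variable for just the one tool invocation avoids the clash with the broker's JMX port:

    JMX_PORT=9999 bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic my-topic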

Re: Regarding issue in Kafka-0.8.2.2.3

2016-02-08 Thread Shishir Shivhare
Hi, In order to get metrics through JMX, we have exported JMX_PORT=8099. But when we try to delete topics from Kafka, we get the following error: Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 8099; nested exception is: java.net…

Regarding issue in Kafka-0.8.2.2.3

2016-02-08 Thread Shishir Shivhare
Hi, In order to get metrics through JMX, we have exported JMX_PORT=8099. But when we try to delete topics from Kafka, we get the following error: Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 1234; nested exception is: java.net…