Has anyone fixed the Producer TimeoutException problem?

2019-07-02 Thread Shyam P
Hi, I am facing the issue below: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for 229 ms has passed since batch creation plus linger time. I tried many producer configuration settings. More details below: https://stackoverflow.com/questions/56807188/how-to-fix-kafka-co
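
For anyone hitting the same expiry, the knobs most commonly tuned are the producer timeout and batching settings. The sketch below is a generic illustration, not the configuration from this thread; the broker address, topic name, and values are assumptions, and delivery.timeout.ms needs Kafka 2.1 or newer.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerTimeoutTuning {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Give the broker more time to acknowledge before a batch is expired.
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60_000);
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000); // overall per-record deadline (Kafka 2.1+)
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
        props.put(ProducerConfig.RETRIES_CONFIG, 5);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"), (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace(); // a TimeoutException would surface here
                }
            });
            producer.flush();
        }
    }
}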

Injecting custom classes into Kafka / custom LoginModule or custom CallbackHandler

2019-07-02 Thread Filip Stysiak
Hello everyone, I am currently working on implementing simple authentication for a system that manages topics and ACLs in our Kafka. The plan is to use a simple login/password system, but instead of storing the user/password pairs directly in the JAAS configuration file we intend to store them in a Postgr
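
Kafka 2.0+ supports this kind of setup through KIP-86: a custom server-side SASL callback handler, registered on the broker via listener.name.<listener>.plain.sasl.server.callback.handler.class. Below is a minimal sketch of the shape of such a handler for SASL/PLAIN; the Postgres lookup is reduced to a hypothetical checkAgainstDatabase helper and is not code from this thread.

import java.util.List;
import java.util.Map;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.AppConfigurationEntry;
import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
import org.apache.kafka.common.security.plain.PlainAuthenticateCallback;

public class DbPlainCallbackHandler implements AuthenticateCallbackHandler {

    @Override
    public void configure(Map<String, ?> configs, String saslMechanism,
                          List<AppConfigurationEntry> jaasConfigEntries) {
        // e.g. read a JDBC URL from the JAAS entry options and open a connection pool
    }

    @Override
    public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
        String username = null;
        for (Callback callback : callbacks) {
            if (callback instanceof NameCallback) {
                username = ((NameCallback) callback).getDefaultName();
            } else if (callback instanceof PlainAuthenticateCallback) {
                PlainAuthenticateCallback plain = (PlainAuthenticateCallback) callback;
                plain.authenticated(checkAgainstDatabase(username, plain.password()));
            } else {
                throw new UnsupportedCallbackException(callback);
            }
        }
    }

    @Override
    public void close() {
        // release the connection pool
    }

    // Hypothetical helper: look the user up in Postgres and verify the password.
    private boolean checkAgainstDatabase(String username, char[] password) {
        return false; // placeholder
    }
}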

Re: Has anyone fixed the Producer TimeoutException problem?

2019-07-02 Thread SenthilKumar K
Hi Shyam, We also faced the `TimeoutException: Expiring 1 record(s)` issue in our Kafka producer client. As described here, first we tried increasing the request timeout, but that di

Re: Has anyone fixed the Producer TimeoutException problem?

2019-07-02 Thread Shyam P
Thanks a lot, Senthil, for the quick reply. I am using kafka_2.11-2.1.1. In your case the Kafka producer client is in one data center and the Kafka broker in another, but in my case I installed Kafka on the same machine where the producer is running; i.e., currently I am in development mode, so everything no

Re: Kafka Streams - Getting exception org.apache.kafka.common.network.InvalidReceiveException exception in cloud

2019-07-02 Thread Vigneswaran Gunasekaran (vicky86)
Can anybody help me on this issue? Thanks, Vigneswaran From: "Vigneswaran Gunasekaran (vicky86)" Date: Monday, 1 July 2019 at 12:45 PM To: "users@kafka.apache.org" Subject: Re: Kafka Streams - Getting exception org.apache.kafka.common.network.InvalidReceiveException exception in cloud Hi Team

Re: Kafka Streams - Getting exception org.apache.kafka.common.network.InvalidReceiveException exception in cloud

2019-07-02 Thread Jason Turim
> > [2019-06-29 21:19:43,050] ERROR Exception while processing request from > 172.21.46.208:9092-172.21.4.208:38368-2446 (kafka.network.Processor) > org.apache.kafka.common.errors.InvalidRequestException: Error parsing > request header. Our best guess of the apiKey is: -32767 > Caused by: org.apach

Re: Kafka Streams - Getting exception org.apache.kafka.common.network.InvalidReceiveException exception in cloud

2019-07-02 Thread Vigneswaran Gunasekaran (vicky86)
Hi Jason, thanks for your reply. I have no idea what the "client_id" field means, because I do not have this field anywhere else. As for the corrupted data, we are receiving the data properly and we are getting this exception intermittently. And after two to three days the application stops working

Replica movement between log directories

2019-07-02 Thread Karolis Pocius
Not having much luck with replica movement between directories, so I'd appreciate it if someone validated the steps I'm taking: 1. Create a topics-to-move JSON file (with a single topic) 2. Generate a candidate partition reassignment 3. Take the above and replace all instances of "any" with "/path
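
For reference, after step 3 a partition entry in the reassignment JSON would look roughly like the sketch below (topic name, broker ids, and paths are placeholders, not values from this thread). Each log_dirs list must match its replicas list in length and order; entries left as "any" keep that replica in its current directory.

{
  "version": 1,
  "partitions": [
    { "topic": "my-topic", "partition": 0,
      "replicas": [1, 2],
      "log_dirs": ["/path/to/new/log/dir", "any"] }
  ]
}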

Re: Has anyone fixed the Producer TimeoutException problem?

2019-07-02 Thread SenthilKumar K
Does it happen to all partitions or only a few partitions? Can you make sure your local setup is working fine? Were you able to produce using the console producer? Example: EVERE: Expiring 7 record(s) for topic-9{partition:9}: 30022 ms has passed since last append Expiring 9 record(s) for topic-2{parti

Re: Kafka streams (2.1.1) - org.rocksdb.RocksDBException:Too many open files

2019-07-02 Thread Sophie Blee-Goldman
This can also happen if you have any open iterators that you forget to close (for example using IQ), although that's probably not what's going on here since 3152 is certainly a lot of rocks instances for a single fs. There's no default number of open files per instance, since rocks creates new fil
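
For the interactive-query case, the usual pattern is to close every iterator obtained from a store, for example with try-with-resources. A minimal sketch, assuming a running KafkaStreams instance and a hypothetical store named "counts-store":

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class StoreDump {
    static void dumpStore(KafkaStreams streams) {
        ReadOnlyKeyValueStore<String, Long> store =
                streams.store("counts-store", QueryableStoreTypes.keyValueStore());
        // The iterator pins RocksDB resources (including file handles) until closed,
        // so try-with-resources guarantees the close even if the loop throws.
        try (KeyValueIterator<String, Long> it = store.all()) {
            while (it.hasNext()) {
                KeyValue<String, Long> entry = it.next();
                System.out.println(entry.key + " = " + entry.value);
            }
        }
    }
}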

Re: Kafka streams (2.1.1) - org.rocksdb.RocksDBException:Too many open files

2019-07-02 Thread emailtokir...@gmail.com
On 2019/06/28 23:29:16, John Roesler wrote: > Hey all, > > If you want to figure it out theoretically, if you print out the > topology description, you'll have some number of state stores listed > in there. The number of Rocks instances should just be > (#global_state_stores + > sum(#partitio
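
Printing the description John mentions is a one-liner once the topology is built; a minimal sketch, assuming a StreamsBuilder-based application:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

public class DescribeTopology {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // ... define the actual processing topology here ...
        Topology topology = builder.build();
        // Each state store is listed per sub-topology; multiplied by the partition
        // count of its input topics, that gives the expected RocksDB instance count.
        System.out.println(topology.describe());
    }
}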

Re: Kafka streams (2.1.1) - org.rocksdb.RocksDBException:Too many open files

2019-07-02 Thread Thameem Ansari
In many places it is mentioned that closing the iterator fixes the issue, but this is true only if we use the Processor API. In the DSL there is no iterator explicitly available; we are using wrapper methods like aggregate, map, groupBy, etc. Here is a snapshot of the issue with exact statis

Re: Kafka streams (2.1.1) - org.rocksdb.RocksDBException:Too many open files

2019-07-02 Thread Sophie Blee-Goldman
It sounds like rocksdb *is* honoring your configs -- the max.open.files config is an internal restriction that tells rocksdb how many open files it is allowed to have, so if that's set to -1 (infinite) it won't ever try to limit its open files and you may hit the OS limit. Think of it this way: if
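
The per-instance limit Sophie describes is set through a RocksDBConfigSetter registered under rocksdb.config.setter (StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG). A minimal sketch; the value 300 is an arbitrary illustration, not a recommendation from this thread:

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

public class BoundedOpenFilesConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(String storeName, Options options, Map<String, Object> configs) {
        // Cap open files per RocksDB instance; keep (#instances * this value) under the OS limit.
        options.setMaxOpenFiles(300);
    }
}

// Registered alongside the other Streams properties, e.g.:
// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedOpenFilesConfigSetter.class);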

Re: Kafka streams (2.1.1) - org.rocksdb.RocksDBException:Too many open files

2019-07-02 Thread Thameem Ansari
As I mentioned, I tried setting the OS limit to 600K & 1 million in the shell and started the application from the same shell, but the problem still exists. I tried rebooting the laptop and the results are the same. So I need a way to find out what exactly is causing this issue when we hit close to 42

Re: Kafka streams (2.1.1) - org.rocksdb.RocksDBException:Too many open files

2019-07-02 Thread emailtokir...@gmail.com
On 2019/07/03 05:46:45, Sophie Blee-Goldman wrote: > It sounds like rocksdb *is* honoring your configs -- the max.open.files > config is an internal restriction that tells rocksdb how many open files it > is allowed to have, so if that's set to -1 (infinite) it won't ever try to > limit its op

Re: Kafka streams (2.1.1) - org.rocksdb.RocksDBException:Too many open files

2019-07-02 Thread Sophie Blee-Goldman
How sure are you that the open file count never goes beyond 50K? Are those numbers just from a snapshot after it crashed? It's possible rocks is creating a large number of files just for a short period of time (maybe while compacting) that causes the open file count to spike and go back down. For