Re: Kafka Producer - Producing to Multiple Topics

2020-08-21 Thread SenthilKumar K
it would be great if someone could provide input(s)/hints :) thanks! --Senthil On Fri, Aug 21, 2020 at 3:28 PM SenthilKumar K wrote: > Updating the Kafka broker version: > > Kafka Version: 2.4.1 > > On Fri, Aug 21, 2020 at 3:21 PM SenthilKumar K > wrote: > >> Hi Team,

Re: kafka.common.StateChangeFailedException: Failed to elect leader for partition XXX under strategy PreferredReplicaPartitionLeaderElectionStrategy

2018-11-15 Thread SenthilKumar K
Adding Kafka Controller Log. [2018-11-15 11:19:23,985] ERROR [Controller id=4 epoch=8] Controller 4 epoch 8 failed to change state for partition XYXY-24 from OnlinePartition to OnlinePartition (state.change.logger) On Thu, Nov 15, 2018 at 5:12 PM SenthilKumar K wrote: > Hello Kafka Expe

kafka.common.StateChangeFailedException: Failed to elect leader for partition XXX under strategy PreferredReplicaPartitionLeaderElectionStrategy

2018-11-15 Thread SenthilKumar K
Hello Kafka Experts, We are facing a StateChangeFailedException on one of the brokers. Out of 4 brokers, 3 are running fine and only one broker is throwing the state change error. I don't find any errors in the ZooKeeper logs related to this. Kafka Version : kafka_2.11-1.1.0 Any input would

Re: Kafka Producer Partition Key Selection

2018-08-29 Thread SenthilKumar K
; > In our case we use NULL as message Key to achieve even distribution in > producer. > With that we were able to achieve very even distribution. > Our Kafka client version is 0.10.1.0 and Kafka broker version is 1.1 > > > Thanks, > Gaurav > > On Wed, Aug

Kafka Producer Partition Key Selection

2018-08-29 Thread SenthilKumar K
Hello Experts, We want to distribute data across partitions in a Kafka cluster. Option 1 : Use a null partition key, which can distribute data evenly across partitions. Option 2 : Choose a key ( Random UUID ? ) which can help to distribute data 70-80%. I have seen the below side effect on the Confluence page about se
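The trade-off between the two options can be seen with a small experiment. This is a minimal self-contained sketch, not Kafka's actual partitioner: it uses `String.hashCode()` as a stand-in for the murmur2 hash that Kafka's default partitioner applies to keyed records (with a null key, older producers round-robin across partitions instead).

```java
import java.util.UUID;

public class KeyDistribution {
    public static void main(String[] args) {
        int numPartitions = 8;
        int[] counts = new int[numPartitions];
        // Simulate 100k records keyed by random UUIDs; Kafka's default
        // partitioner would compute murmur2(keyBytes) % numPartitions instead.
        for (int i = 0; i < 100_000; i++) {
            String key = UUID.randomUUID().toString();
            int partition = Math.floorMod(key.hashCode(), numPartitions);
            counts[partition]++;
        }
        for (int p = 0; p < numPartitions; p++) {
            System.out.println("partition " + p + ": " + counts[p]);
        }
    }
}
```

With random UUID keys the counts come out close to uniform; skew appears only when the key space itself is skewed (few distinct keys, hot keys), which may explain the 70-80% figure above.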

Kafka Log deletion Problem

2018-02-02 Thread SenthilKumar K
Hello Experts, We have a Kafka setup running for our analytics pipeline ... Below is the broker config .. max.message.bytes = 67108864 replica.fetch.max.bytes = 67108864 zookeeper.session.timeout.ms = 7000 replica.socket.timeout.ms = 3 offsets.commit.timeout.ms = 5000 request.timeout.ms = 4000
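For reference when debugging deletion problems like this one: time- and size-based log deletion on the broker is driven by the retention properties below. The values shown are illustrative defaults, not recommendations for this cluster.

```properties
# Deletion applies only to topics with the "delete" cleanup policy.
log.cleanup.policy=delete
# Time-based retention: segments older than this become eligible for deletion.
log.retention.hours=168
# Size-based retention per partition (-1 disables the size limit).
log.retention.bytes=-1
# A segment must roll before it can be deleted; very large or slow-rolling
# segments delay deletion noticeably.
log.segment.bytes=1073741824
log.roll.hours=168
# How often the broker checks for deletable segments.
log.retention.check.interval.ms=300000
```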

Kafka Consumer - org.apache.kafka.common.errors.TimeoutException: Failed to get offsets by times in 305000 ms

2017-10-11 Thread SenthilKumar K
Hi All , Recently we started seeing a Kafka consumer error with a timeout. What could be the cause here ? Version : kafka_2.11-0.11.0.0 Consumer Properties: bootstrap.servers, enable.auto.commit, auto.commit.interval.ms, session.timeout.ms
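One hint in the error itself: 305000 ms matches the 0.11 consumer's default `request.timeout.ms` (`max.poll.interval.ms` + 5000 ms), which bounds how long blocking lookups such as `offsetsForTimes()` may wait for a broker response. A sketch of the relevant consumer properties (values are the 0.11 defaults):

```properties
# Upper bound on blocking consumer calls, including offset-by-time lookups;
# the "Failed to get offsets by times in 305000 ms" error fires at this limit.
request.timeout.ms=305000
max.poll.interval.ms=300000
# Raising these only hides the symptom if brokers are slow or unreachable;
# broker health and network connectivity are worth checking first.
```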

Re: Different Data Types under same topic

2017-08-18 Thread SenthilKumar K
+ dev experts for inputs. --Senthil On Fri, Aug 18, 2017 at 9:15 PM, SenthilKumar K wrote: > Hi Users , We have planned to use Kafka for one of our use cases to collect data > from different servers and persist it into a message bus .. > > Flow Would Be : > Source --> Kafka -->
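One common answer to the "different data types under one topic" question is an envelope that tags each payload with its type (since Kafka 0.11, record headers can carry the tag instead). A minimal self-contained sketch; the `type` values and routing targets here are illustrative, not from the thread:

```java
import java.nio.charset.StandardCharsets;

public class Envelope {
    final String type;     // e.g. "access_log", "app_metric" (illustrative tags)
    final byte[] payload;  // opaque bytes: JSON, Avro, etc.

    Envelope(String type, byte[] payload) {
        this.type = type;
        this.payload = payload;
    }

    // Consumers dispatch on the tag instead of sniffing the payload format.
    static String route(Envelope e) {
        switch (e.type) {
            case "access_log": return "log pipeline";
            case "app_metric": return "metrics pipeline";
            default:           return "dead letter";
        }
    }

    public static void main(String[] args) {
        Envelope e = new Envelope("access_log",
                "{\"path\":\"/\"}".getBytes(StandardCharsets.UTF_8));
        System.out.println(route(e)); // -> log pipeline
    }
}
```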

Re: Handling 2 to 3 Million Events before Kafka

2017-06-22 Thread SenthilKumar K
Hi Barton - I think we can use the async producer with callback API(s) to keep track of which events failed .. --Senthil On Thu, Jun 22, 2017 at 4:58 PM, SenthilKumar K wrote: > Thanks Barton.. I'll look into these .. > > On Thu, Jun 22, 2017 at 7:12 AM, Garrett Barton > wrote:
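The callback approach above can be sketched without a live cluster. Here `send` is a stand-in for `KafkaProducer.send(record, callback)`, whose real callback receives a `RecordMetadata` and an `Exception`; the point is only the bookkeeping of failed events for later retry, and all names below are illustrative:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.BiConsumer;

public class AsyncSendTracking {
    // Failed events collected for retry; concurrent because the real
    // producer invokes callbacks on its I/O thread.
    static final ConcurrentLinkedQueue<String> failed = new ConcurrentLinkedQueue<>();

    // Stand-in for producer.send(record, callback): completes asynchronously,
    // then reports (event, null) on success or (event, exception) on failure.
    static CompletableFuture<Void> send(String event, BiConsumer<String, Exception> callback) {
        return CompletableFuture.runAsync(() -> {
            if (event.contains("bad")) throw new RuntimeException("broker rejected " + event);
        }).whenComplete((v, ex) ->
            callback.accept(event, ex == null ? null : (Exception) ex.getCause()));
    }

    public static void main(String[] args) {
        List<String> events = List.of("e1", "bad-e2", "e3");
        CompletableFuture<?>[] inFlight = events.stream()
                .map(e -> send(e, (event, exception) -> {
                    if (exception != null) failed.add(event); // track for retry
                }))
                .toArray(CompletableFuture[]::new);
        CompletableFuture.allOf(inFlight).exceptionally(ex -> null).join();
        System.out.println("failed events: " + failed);
    }
}
```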

Re: Handling 2 to 3 Million Events before Kafka

2017-06-22 Thread SenthilKumar K
> On Wed, Jun 21, 2017 at 2:23 PM, Tauzell, Dave < > dave.tauz...@surescripts.com> wrote: > >> I’m not really familiar with Netty so I won’t be of much help. Maybe >> try posting on a Netty forum to see what they think? >> -Dave >> >> From: SenthilKuma

Re: Handling 2 to 3 Million Events before Kafka

2017-06-21 Thread SenthilKumar K
hub.com/smallnest/C1000K-Servers . > > > > It seems possible with the right sort of Kafka producer tuning. > > > > -Dave > > > > *From:* SenthilKumar K [mailto:senthilec...@gmail.com] > *Sent:* Wednesday, June 21, 2017 8:55 AM > *To:* Tauzell, Dave > *Cc

Re: Handling 2 to 3 Million Events before Kafka

2017-06-21 Thread SenthilKumar K
rs > > Is the problem that web servers cannot send to Kafka fast enough or your > consumers cannot process messages off of kafka fast enough? > What is the average size of these messages? > > -Dave > > -Original Message- > From: SenthilKumar K [mailto:senthilec...@gma

Handling 2 to 3 Million Events before Kafka

2017-06-21 Thread SenthilKumar K
Hi Team , Sorry if this question is irrelevant to the Kafka group ... I have been trying to solve the problem of handling 5 GB/sec ingestion. Kafka is a really good candidate for us to handle this ingestion rate .. 100K machines --> { HTTP Server (Jetty/Netty) } --> Kafka Cluster .. I see the problem
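On the producer side of an ingestion rate like this, throughput is mostly a batching/compression trade-off. A hedged starting point for the producer configuration; the values are illustrative and would need load-testing against the actual cluster:

```properties
# Larger batches amortize per-request overhead at high ingest rates.
batch.size=262144
# Wait a few ms to fill batches instead of sending each record immediately.
linger.ms=10
# Compression shrinks network and disk usage; lz4 is a common choice.
compression.type=lz4
# acks=1 trades some durability for latency; acks=all is safer but slower.
acks=1
# Total memory for unsent batches; send() blocks when it is exhausted.
buffer.memory=134217728
```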

Kafka Time Based Index - Server Property ?

2017-05-30 Thread SenthilKumar K
Hi All , I've started exploring SearchMessagesByTimestamp https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index#KIP-33-Addatimebasedlogindex-Searchmessagebytimestamp . The Kafka producer produces the record with a timestamp . When I try to search timestamps few cases i

Re: Efficient way of Searching Messages By Timestamp - Kafka

2017-05-28 Thread SenthilKumar K
Hi Dev, It would be great if anybody could share their experience with Search Message by Timestamp .. Cheers, Senthil On May 28, 2017 2:08 AM, "SenthilKumar K" wrote: > Hi Team , Any help here Pls ? > > Cheers, > Senthil > > On Sat, May 27, 2017 at 8:25 PM, SenthilK

Re: Efficient way of Searching Messages By Timestamp - Kafka

2017-05-27 Thread SenthilKumar K
Hi Team , Any help here Pls ? Cheers, Senthil On Sat, May 27, 2017 at 8:25 PM, SenthilKumar K wrote: > Hello Kafka Developers , Users , > > We are exploring the SearchMessageByTimestamp feature in Kafka for our > use case . > > Use Case : Kafka will be realtime m

Efficient way of Searching Messages By Timestamp - Kafka

2017-05-27 Thread SenthilKumar K
Hello Kafka Developers , Users , We are exploring the SearchMessageByTimestamp feature in Kafka for our use case . Use Case : Kafka will be the realtime message bus ; users should be able to pull logs by specifying start_date and end_date, or pull the last five minutes of data, etc ... I did a POC

Re: Kafka Read Data from All Partition Using Key or Timestamp

2017-05-25 Thread SenthilKumar K
> > https://kafka.apache.org/0102/javadoc/org/apache/kafka/clients/consumer/Consumer.html#offsetsForTimes(java.util.Map) > > -hans > > > On May 25, 2017, at 6:39 AM, SenthilKumar K > wrote: > > > > I did an experiment on searching messages using timestamps .. > > >
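The `offsetsForTimes` API linked above returns, per partition, the earliest offset whose record timestamp is at or after the requested timestamp, or null if no such record exists. That lookup rule can be sketched against an in-memory stand-in for the broker's time index; `TreeMap.ceilingEntry` mirrors the semantics, though the broker actually consults KIP-33's time-based index files, and the timestamps and offsets below are made up for illustration:

```java
import java.util.Map;
import java.util.TreeMap;

public class TimestampLookup {
    // Stand-in time index for one partition: record timestamp -> offset.
    static final TreeMap<Long, Long> timeIndex = new TreeMap<>(Map.of(
            1_000L, 0L,   // offset 0 written at t=1000
            2_000L, 5L,   // offset 5 written at t=2000
            3_000L, 9L)); // offset 9 written at t=3000

    // Earliest offset with timestamp >= target, like offsetsForTimes();
    // returns null when no such record exists, as the real API does.
    static Long offsetForTime(long targetTimestamp) {
        Map.Entry<Long, Long> e = timeIndex.ceilingEntry(targetTimestamp);
        return e == null ? null : e.getValue();
    }

    public static void main(String[] args) {
        System.out.println(offsetForTime(1_500L)); // -> 5 (first record at/after t=1500)
        System.out.println(offsetForTime(9_999L)); // -> null (no newer records)
    }
}
```

To read a time range, a consumer would `seek` each partition to the offset returned for start_date and poll until record timestamps pass end_date.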

Re: Kafka Read Data from All Partition Using Key or Timestamp

2017-05-25 Thread SenthilKumar K
( ... Pls advise me here! Cheers, Senthil On Thu, May 25, 2017 at 3:36 PM, SenthilKumar K wrote: > Thanks a lot Mayuresh. I will look into SearchMessageByTimestamp feature > in Kafka .. > > Cheers, > Senthil > > On Thu, May 25, 2017 at 1:12 PM, Mayuresh Gharat < &

Re: Kafka Read Data from All Partition Using Key or Timestamp

2017-05-25 Thread SenthilKumar K
ou the data. What I meant was it will not look at the timestamp specified > by you in the actual data payload. > > Thanks, > > Mayuresh > > On Thu, May 25, 2017 at 12:43 PM, SenthilKumar K > wrote: > >> Hello Dev Team, Pls let me know if any option to read data fro

Re: Kafka Read Data from All Partition Using Key or Timestamp

2017-05-25 Thread SenthilKumar K
Hello Dev Team, Pls let me know if there is any option to read data from Kafka (all partitions) using a timestamp . Also, can we set a custom offset value for messages ? Cheers, Senthil On Wed, May 24, 2017 at 7:33 PM, SenthilKumar K wrote: > Hi All , We have been using Kafka for our Use Case which helps

Kafka Read Data from All Partition Using Key or Timestamp

2017-05-24 Thread SenthilKumar K
Hi All , We have been using Kafka for our use case, which helps in delivering real-time raw logs .. I have a requirement to fetch data from Kafka by using an offset .. DataSet Example : {"access_date":"2017-05-24 13:57:45.044","format":"json","start":"1490296463.031"} {"access_date":"2017-05-24 13:57: