RE: Kafka partitions are piled up for Consumer

2017-07-06 Thread Ghosh, Achintya (Contractor)
for Consumer I am observing the same with our 0.10.2 cluster, where consumers hang on those partitions whose current offset's data gets deleted due to retention. It looks like a bug to me. Thanks! On Jul 6, 2017 9:23 AM, "Ghosh, Achintya (Contractor)" < achintya

Kafka partitions are piled up for Consumer

2017-07-06 Thread Ghosh, Achintya (Contractor)
Hi, If we have a slow backend (let's say high database response times), messages pile up on a few of the partitions. Once the backend gets back to normal, those partitions still do not process any messages and it creates a huge lag. Any idea why it happened? We thought once the backend g

Kafka shutdown gracefully

2017-07-05 Thread Ghosh, Achintya (Contractor)
Hi team, What is the command to shut down the Kafka server gracefully instead of using 'kill -9 PID'? If we use bin/kafka-server-stop.sh it shows "No kafka server to stop", but the service is actually running and I see the PID by using "ps -ef|grep kafka". Thanks Achintya
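
For reference, bin/kafka-server-stop.sh just looks up the broker PID and sends it SIGTERM, so when it reports "No kafka server to stop" the same graceful shutdown can still be triggered by hand. A minimal sketch, assuming the broker runs as a plain java process and the grep pattern matches how it was launched:

  # find the broker PID (pattern is illustrative; adjust to your launch command)
  PID=$(ps -ef | grep '[k]afka\.Kafka' | awk '{print $2}')
  # SIGTERM runs Kafka's shutdown hook (controlled shutdown), unlike kill -9
  kill -s TERM "$PID"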

Larger payload size

2017-06-08 Thread Ghosh, Achintya (Contractor)
Hi there, We observed that when our payload size is larger we see a "Failed to send; nested exception is org.apache.kafka.common.errors.RecordTooLargeException" exception, so we changed the settings from 1 MB to 5 MB on both the Producer and Consumer end. Server.properties: message.max.bytes=5242880 replic
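
For context, the size limit usually has to be raised in more than one place: on the broker for both client requests and replica fetches, and on the clients. A sketch of the settings commonly adjusted together, with 5242880 (5 MB) as an illustrative value:

  # broker (server.properties)
  message.max.bytes=5242880
  replica.fetch.max.bytes=5242880
  # producer
  max.request.size=5242880
  # consumer (new consumer)
  max.partition.fetch.bytes=5242880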

Current offset for partition out of range; reset offset

2017-02-22 Thread Ghosh, Achintya (Contractor)
Hi All, One of the partitions is showing a huge lag (21K) and I see the below error in the kafkaServer.out log of one of the Kafka nodes. Current offset 43294 for partition [PROD_TASK_TOPIC_120,10] out of range; reset offset to 43293 (kafka.server.ReplicaFetcherThread) What is the quick solution,

RE: Messages are lost

2017-01-24 Thread Ghosh, Achintya (Contractor)
@kafka.apache.org Subject: Re: Messages are lost Make sure you don't have an orphaned process holding onto the various kafka/zk folders. If it won't respond and you can't kill it then this might have happened. On Tue, Jan 24, 2017 at 6:46 AM, Ghosh, Achintya (Contractor) < achintya

RE: Messages are lost

2017-01-24 Thread Ghosh, Achintya (Contractor)
Can anyone please answer this? Thanks Achintya -Original Message- From: Ghosh, Achintya (Contractor) [mailto:achintya_gh...@comcast.com] Sent: Monday, January 23, 2017 1:51 PM To: users@kafka.apache.org Subject: RE: Messages are lost Version 0.10 and I don’t have the thread dump but

RE: Messages are lost

2017-01-23 Thread Ghosh, Achintya (Contractor)
What version of kafka have you deployed? Can you post a thread dump of the hung broker? On Fri, Jan 20, 2017 at 12:14 PM, Ghosh, Achintya (Contractor) < achintya_gh...@comcast.com> wrote: > Hi there, > > I see the below exception in one of my node's log( cluster with 3 > n

RE: Messages are lost

2017-01-23 Thread Ghosh, Achintya (Contractor)
Can anyone please update on this? Thanks Achintya -Original Message- From: Ghosh, Achintya (Contractor) [mailto:achintya_gh...@comcast.com] Sent: Friday, January 20, 2017 3:15 PM To: users@kafka.apache.org Subject: Messages are lost Hi there, I see the below exception in one of my

Messages are lost

2017-01-20 Thread Ghosh, Achintya (Contractor)
Hi there, I see the below exception in one of my nodes' logs (cluster with 3 nodes) and then the node stops responding (it's in a hung state; I mean if I do ps -ef|grep kafka, I see the Kafka process but it is not responding) and we lost around 100 messages: 1. What could be the reas

log.retention attribute not working

2016-12-14 Thread Ghosh, Achintya (Contractor)
Hi there, Any idea why the log.retention attribute is not working? We kept log.retention.hours=6 in server.properties but we see old data is not getting deleted. We see Dec 9th data/log files are still there. We are running this on production boxes and if it does not delete the old files our stor
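
One common reason retention appears to do nothing is that deletion works per log segment: only segments that have been rolled and aged past the retention window are removed, so a large, still-active segment keeps old data around. A sketch of the broker settings that interact here, with illustrative values:

  # server.properties (illustrative values)
  log.retention.hours=6
  # segments become eligible for deletion only after they are rolled
  log.segment.bytes=1073741824
  log.roll.hours=6
  # how often the broker checks for deletable segments
  log.retention.check.interval.ms=300000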

RE: Kafka consumers are not equally distributed

2016-11-28 Thread Ghosh, Achintya (Contractor)
it will not be evenly distributed. Guozhang On Fri, Nov 25, 2016 at 9:12 AM, Ghosh, Achintya (Contractor) < achintya_gh...@comcast.com> wrote: > So what is the option to make messages equally distributed from > that point? I mean is there any other option to make the consumers

RE: Kafka consumers are not equally distributed

2016-11-25 Thread Ghosh, Achintya (Contractor)
the partitions are sitting > idle and some are overloaded", do you mean that some partitions > do not have new data coming in and others keep getting high traffic > produced to them such that the consumer cannot keep up? In this case it is > not the consumer's issue, bu

RE: Kafka consumers are not equally distributed

2016-11-25 Thread Ghosh, Achintya (Contractor)
r cannot keep up? In this case it is not the consumer's issue, but the producer not producing in a balanced manner. Guozhang On Thu, Nov 24, 2016 at 7:45 PM, Ghosh, Achintya (Contractor) < achintya_gh...@comcast.com> wrote: > Java consumer. 0.9.1 > > Thanks > Achintya

RE: Kafka consumers are not equally distributed

2016-11-24 Thread Ghosh, Achintya (Contractor)
? Is it Scala or Java consumers? Guozhang On Wed, Nov 23, 2016 at 6:38 AM, Ghosh, Achintya (Contractor) < achintya_gh...@comcast.com> wrote: > No, that is not the reason. Initially all the partitions were assigned > the messages and those were processed very fast and sit idle e

RE: Kafka consumers are not equally distributed

2016-11-23 Thread Ghosh, Achintya (Contractor)
partition key ? On Wed, Nov 23, 2016 at 12:33 AM, Ghosh, Achintya (Contractor) < achintya_gh...@comcast.com> wrote: > Hi there, > > We are doing the load test in Kafka with 25tps and first 9 hours it > went fine almost 80K/hr messages were processed after that we see a > lot o

Kafka consumers are not equally distributed

2016-11-22 Thread Ghosh, Achintya (Contractor)
Hi there, We are doing a load test in Kafka at 25 tps. For the first 9 hours it went fine, almost 80K/hr messages were processed; after that we see a lot of lag and we stopped the incoming load. Currently we see 15K/hr messages being processed. We have 40 consumer instances with concurrency 4 and

RE: Kafka 0.10 Monitoring tool

2016-11-16 Thread Ghosh, Achintya (Contractor)
On 15 November 2016 at 15:30, Ghosh, Achintya (Contractor) < achintya_gh...@comcast.com> wrote: > Yes, we tried with this command bu

RE: Kafka 0.10 Monitoring tool

2016-11-15 Thread Ghosh, Achintya (Contractor)
On 15 November 2016 at 15:30, Ghosh, Achintya (Contractor) < achintya_gh...@comcast.com>

RE: Kafka 0.10 Monitoring tool

2016-11-15 Thread Ghosh, Achintya (Contractor)
On 15 November 2016 at 14:28, Ghosh, Achintya (Contractor) <

RE: Kafka 0.10 Monitoring tool

2016-11-15 Thread Ghosh, Achintya (Contractor)
arch Consulting Support Training - http://sematext.com/ On Mon, Nov 14, 2016 at 5:16 PM, Ghosh, Achintya (Contractor) < achintya_gh...@comcast.com> wrote: > Hi there, > What is the best open source tool for Kafka monitoring mainly to check > the offset lag. We tried the follo

Kafka 0.10 Monitoring tool

2016-11-14 Thread Ghosh, Achintya (Contractor)
Hi there, What is the best open source tool for Kafka monitoring, mainly to check the offset lag? We tried the following tools: 1. Burrow 2. KafkaOffsetMonitor 3. Prometheus and Grafana 4. Kafka Manager But nothing is working perfectly. Please help us on this. Thanks
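
For a quick lag check without a dedicated tool, the consumer-groups script shipped with 0.10 can describe a group's committed offsets, log end offsets, and per-partition lag. A sketch, assuming offsets are stored in Kafka (the broker address and group name are placeholders):

  bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --new-consumer --describe --group my-consumer-group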

SendFailedException

2016-09-26 Thread Ghosh, Achintya (Contractor)
Hi there, Can anyone please help us? We are getting a SendFailedException when the Kafka consumer is starting and it is not able to consume any messages. Thanks Achintya

Kafka duplicate offset at Consumer

2016-09-20 Thread Ghosh, Achintya (Contractor)
Hi there, I see the Kafka consumer receive the same offset value many times, and hence it creates a lot of duplicate messages. What could be the reason and how can we solve this issue? Thanks Achintya

RE: Kafka usecase

2016-09-19 Thread Ghosh, Achintya (Contractor)
faster than they are consumed, you will get a backlog of messages. In that case, you may need to grow your cluster so that more messages are processed in parallel. Best regards / Mit freundlichen Grüßen / Sincères salutations M. Lohith Samaga -Original Message- From: Ghosh, Achintya

Kafka usecase

2016-09-18 Thread Ghosh, Achintya (Contractor)
Hi there, We have a use case where we do a lot of business logic to process each message, and sometimes it takes 1-2 sec, so will Kafka fit our use case? Thanks Achintya
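
With 1-2 seconds of work per message, the usual concern on the 0.9/0.10 consumer is not throughput but spending longer between poll() calls than the group timeouts allow, which triggers rebalances and redelivery. A sketch of the consumer settings typically tuned for slow processing (values are illustrative; max.poll.records needs 0.10.0 or later):

  # consumer properties (illustrative values)
  enable.auto.commit=false
  session.timeout.ms=30000
  # cap the work returned by each poll() so processing finishes within the timeout
  max.poll.records=10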

RE: Kafka consumers unable to process message

2016-08-31 Thread Ghosh, Achintya (Contractor)
replica fetcher threads on the broker failing which makes perfect sense since some of the partitions were bound to have leaders in the failed datacenter. I'd actually like to see the consumer logs at DEBUG level if possible. Thanks, Jason On Wed, Aug 31, 2016 at 7:48 PM, Ghosh, Achintya (Contra

RE: Kafka consumers unable to process message

2016-08-31 Thread Ghosh, Achintya (Contractor)
r datacenter's zookeeper server? I tried > to increase the zookeeper session timeout and connection timeout but no luck. > > Please help on this. > Thanks > Achintya > > > -Original Message- > From: Jason Gustafson [mailto:ja...@confluent.io] > Sent: Wedne

RE: Kafka consumers unable to process message

2016-08-31 Thread Ghosh, Achintya (Contractor)
n time out but no luck. Please help on this. Thanks Achintya -Original Message- From: Jason Gustafson [mailto:ja...@confluent.io] Sent: Wednesday, August 31, 2016 4:05 PM To: users@kafka.apache.org Cc: d...@kafka.apache.org Subject: Re: Kafka consumers unable to process message Hi A

Kafka consumers unable to process message

2016-08-31 Thread Ghosh, Achintya (Contractor)
Hi there, The Kafka consumer gets stuck at the consumer.poll() method if my current datacenter is down and the replicated messages are in the remote datacenter. How do we solve that issue? Thanks Achintya

Kafka unable to process message

2016-08-30 Thread Ghosh, Achintya (Contractor)
Hi there, What does the below error mean and how do we avoid it? I see this error in one of the kafkaServer.out files when the other broker is down, and we are not able to process any messages as we see o.a.k.c.c.i.AbstractCoordinator - Issuing group metadata request to broker 5 in the application log [2016-08

RE: Batch Expired

2016-08-29 Thread Ghosh, Achintya (Contractor)
t is a pretty big timeout. However, I noticed that if there are no connections made to the broker, you can still get batch expiry. On Fri, Aug 26, 2016 at 6:32 AM, Ghosh, Achintya (Contractor) < achintya_gh...@comcast.com> wrote: > Hi there, > > What is the recommended Producer setting for Pro

Batch Expired

2016-08-26 Thread Ghosh, Achintya (Contractor)
Hi there, What are the recommended Producer settings, as I see a lot of Batch Expired exceptions even though I put request.timeout=6? Producer settings: acks=1 retries=3 batch.size=16384 linger.ms=5 buffer.memory=33554432 request.timeout.ms=6 timeout.ms=6 Thanks Achintya
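
In the 0.9/0.10 producer, "Batch Expired" means a batch sat in the accumulator longer than request.timeout.ms without being sent, typically because the broker was unreachable or the sender could not keep up, so raising the timeout alone rarely fixes it. A sketch of the settings usually involved, with purely illustrative values (not a tuning recommendation):

  # producer settings (illustrative values only)
  acks=1
  retries=3
  batch.size=16384
  linger.ms=5
  buffer.memory=33554432
  # batches queued longer than this are expired with "Batch Expired"
  request.timeout.ms=60000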

Kafka Mirror maker duplicate issue

2016-08-12 Thread Ghosh, Achintya (Contractor)
Hi there, I created a standby broker using Kafka MirrorMaker, but the same messages get consumed from both the source broker and the mirror broker. Ex: I send 1000 messages, let's say offset values 1 to 1000, and consume 500 messages from the source broker. Now my broker goes down and I want to read the rest

RE: Kafka consumer getting duplicate message

2016-08-10 Thread Ghosh, Achintya (Contractor)
Can anyone please check this one? Thanks Achintya -Original Message- From: Ghosh, Achintya (Contractor) Sent: Monday, August 08, 2016 9:44 AM To: users@kafka.apache.org Cc: d...@kafka.apache.org Subject: RE: Kafka consumer getting duplicate message Thank you , Ewen for your response

RE: Kafka consumer getting duplicate message

2016-08-08 Thread Ghosh, Achintya (Contractor)
that probably means shutdown/failover is not being handled correctly. If you can provide more info about your setup, we might be able to suggest tweaks that will avoid these situations. -Ewen On Fri, Aug 5, 2016 at 8:15 AM, Ghosh, Achintya (Contractor) < achintya_gh...@comcast.com> wrot

Kafka consumer getting duplicate message

2016-08-05 Thread Ghosh, Achintya (Contractor)
Hi there, We are using Kafka 1.0.0.M2 with Spring and we see a lot of duplicate messages getting received by the Listener onMessage() method. We configured: enable.auto.commit=false session.timeout.ms=15000 factory.getContainerProperties().setSyncCommits(true); factory.setConcurrency(5); So
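
With auto-commit disabled and sync commits, duplicates most often come from rebalances or restarts: anything processed after the last committed offset is delivered again to whichever consumer takes over the partition, so this setup gives at-least-once delivery, not exactly-once. A sketch of the consumer-side settings that usually matter, with illustrative values:

  # consumer properties (illustrative values)
  enable.auto.commit=false
  # keep this comfortably above the longest gap between poll() calls,
  # otherwise the group rebalances and uncommitted messages are redelivered
  session.timeout.ms=15000
  heartbeat.interval.ms=3000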