Re: new producer failed with org.apache.kafka.common.errors.TimeoutException

2016-02-25 Thread Kris K
…fixes the issue for you? -Ewen. On Mon, Feb 22, 2016 at 9:37 PM, Kris K wrote: Hi All, I saw an issue today wherein the producers (new producers) started to fail with org.apache.kafka.common.errors.TimeoutException: Failed t…

new producer failed with org.apache.kafka.common.errors.TimeoutException

2016-02-22 Thread Kris K
Hi All, I saw an issue today wherein the producers (new producers) started to fail with org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 6 ms. This issue happened when we took down one of the 6 brokers (running version 0.8.2.1) for planned maintenance (graceful…
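In the 0.8.2.x Java ("new") producer, the metadata wait that throws this TimeoutException is bounded by metadata.fetch.timeout.ms. A minimal, illustrative producer.properties sketch (broker hostnames are hypothetical, values are the 0.8.2 defaults):

```properties
# Hypothetical producer.properties for the 0.8.2.x "new" producer.
bootstrap.servers=broker1:9092,broker2:9092
# How long send() may block waiting for topic metadata; the
# "Failed to update metadata" TimeoutException fires when this elapses.
metadata.fetch.timeout.ms=60000
# Force a periodic metadata refresh so a removed broker is noticed
# even without a send failure.
metadata.max.age.ms=300000
```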

Re: 0.9.0.0 release notes is opening download mirrors page

2015-12-01 Thread Kris K
Thanks Jun. Missed that part, my bad. Regards, Kris. On Mon, Nov 30, 2015 at 4:17 PM, Jun Rao wrote: Kris, it just points to the mirror site. If you click on one of the links, you will see the release notes. Thanks, Jun. On Mon, Nov 30,…

0.9.0.0 release notes is opening download mirrors page

2015-11-30 Thread Kris K
Hi, Just noticed that the Release notes link of 0.9.0.0 is pointing to the download mirrors page. https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.0/RELEASE_NOTES.html Thanks, Kris K

Re: Java high level consumer providing duplicate messages when auto commit is off

2015-10-21 Thread Kris K
Hi Cliff, one other case I observed in my environment is when there were GC pauses on one of the high-level consumers in the group. Thanks, Kris. On Wed, Oct 21, 2015 at 10:12 AM, Cliff Rhyne wrote: Hi James, there are two scenarios we run: 1. Multiple partitions with one consumer pe…
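The GC-pause case mentioned here is worth spelling out: if a stop-the-world pause outlasts the ZooKeeper session timeout, ZK expires the consumer's session, the group rebalances, and messages that were consumed but not yet committed are redelivered to another member. A hedged consumer.properties sketch (values shown are the old-consumer defaults, not a recommendation):

```properties
# Old (ZooKeeper-based) high-level consumer settings relevant to
# GC-induced duplicates. A GC pause longer than the session timeout
# expires the ZK session and triggers a rebalance, which can redeliver
# any messages whose offsets were not yet committed.
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
```

Raising the session timeout trades slower failure detection for fewer spurious rebalances; tuning the JVM to shorten pauses is the other half of the fix.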

Re: error while high level consumer

2015-07-28 Thread Kris K
…3:41 PM, Jiangjie Qin wrote: This is because the zookeeper path storing the previous owner info hasn't been deleted yet at that moment. If the rebalance completes after retry, it should be fine. Jiangjie (Becket) Qin. On…

error while high level consumer

2015-07-24 Thread Kris K
Hi, I started seeing these errors in the logs continuously when I try to bring the High Level Consumer up. Please help. ZookeeperConsumerConnector [INFO] [XXX], waiting for the partition ownership to be deleted: 1 ZookeeperConsumerConnector [INFO] [XXX], end rebalancing consumer XXX
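Per the reply in this thread, the "waiting for the partition ownership to be deleted" messages mean the previous owner's ephemeral ZooKeeper node is still present, and the consumer retries the rebalance until it disappears. A hedged old-consumer configuration sketch that gives those retries more headroom (property names are real 0.8.x consumer settings; the values are illustrative):

```properties
# Illustrative consumer.properties: allow more rebalance attempts and a
# longer pause between them, so the old owner's ephemeral ZK node has
# time to be removed before the retries are exhausted.
rebalance.max.retries=8
rebalance.backoff.ms=2000
```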

consumer property - zookeeper.sync.time.ms

2015-07-23 Thread Kris K
Hi, can someone please shed some light on the consumer property zookeeper.sync.time.ms? What are the implications of decreasing it below 2 seconds? I read the description in the documentation but could not understand it completely. Thanks, Kris
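For context on the question above: in the old consumer's configuration, zookeeper.sync.time.ms bounds how far a ZooKeeper follower may lag behind the leader, and it also serves as the default for rebalance.backoff.ms when that is not set explicitly. A sketch showing the default:

```properties
# Old-consumer default. Lowering it tightens the allowed ZK follower lag,
# and (indirectly) shortens the default backoff between rebalance retries,
# which can make rebalancing more aggressive.
zookeeper.sync.time.ms=2000
```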

consumer memory footprint

2015-07-16 Thread Kris K
Hi All, Is there a way to calculate the amount of memory used per thread in case of a high level consumer? I am particularly interested in calculating the memory required by a process running 10 high level consumer threads for 15 topics with max. file size set to 100 MB. Thanks, Kris
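A commonly cited rule of thumb for the old high-level consumer (not an official formula, so treat this as an upper-bound sketch) is: memory ≈ number of streams × queued.max.message.chunks × fetch.message.max.bytes, since each stream buffers up to that many chunks of at most the fetch size each. A small worked example with hypothetical values:

```java
// Rough upper-bound estimate of old high-level consumer fetch-buffer
// memory, under the rule of thumb:
//   streams x queued.max.message.chunks x fetch.message.max.bytes
// This ignores JVM overhead, decompression, and application copies.
public class ConsumerMemoryEstimate {
    static long estimateBytes(int streams, int queuedChunks, long fetchMaxBytes) {
        return (long) streams * queuedChunks * fetchMaxBytes;
    }

    public static void main(String[] args) {
        // Hypothetical inputs: 10 consumer threads, the 0.8.x defaults of
        // queued.max.message.chunks=2 and fetch.message.max.bytes=1 MiB.
        long est = estimateBytes(10, 2, 1024L * 1024L);
        System.out.println(est / (1024L * 1024L) + " MiB"); // prints "20 MiB"
    }
}
```

Note this bounds the fetch buffers only; the question's "100 MB" figure would slot in as fetchMaxBytes if it refers to the maximum fetch size.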

Re: high level consumer memory footprint

2015-06-25 Thread Kris K
…from 100 partitions. Thanks, Kris. On Tue, Jun 23, 2015 at 11:18 AM, Kris K wrote: Hi, I was just wondering if there is any difference in the memory footprint of a high level consumer when: 1. the consumer is live and continuously consuming messages with no b…

high level consumer memory footprint

2015-06-23 Thread Kris K
Hi, I was just wondering if there is any difference in the memory footprint of a high level consumer when: 1. the consumer is live and continuously consuming messages with no backlogs; 2. the consumer is down for quite some time and needs to be brought up to clear the backlog. My test case w…

duplicate messages at consumer

2015-06-16 Thread Kris K
Hi, while testing message delivery with Kafka, I noticed that a few duplicate messages got delivered by consumers in the same consumer group (two consumers got the same message within a few milliseconds of each other). However, I do not see any redundancy at the producer or broker. One more observat…
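Because Kafka's consumer-group delivery is at-least-once, brief overlaps around rebalances can hand the same message to two members. The usual remedy is application-side de-duplication. A minimal sketch, assuming messages carry a unique key (the class and method names here are illustrative, not a Kafka API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Consumer-side de-duplication sketch: remember a bounded set of recently
// seen message keys and skip repeats delivered around a rebalance.
public class RecentKeyDeduper {
    private final Map<String, Boolean> seen;

    public RecentKeyDeduper(final int capacity) {
        // Access-ordered LinkedHashMap gives simple LRU eviction,
        // keeping the memory footprint bounded at `capacity` keys.
        this.seen = new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> e) {
                return size() > capacity;
            }
        };
    }

    /** Returns true the first time a key is seen, false for duplicates. */
    public synchronized boolean firstTime(String key) {
        return seen.put(key, Boolean.TRUE) == null;
    }
}
```

Each consumer would call firstTime(messageKey) before processing and drop the message on false; the LRU bound means very old keys can repeat, so size the window to comfortably cover the rebalance overlap.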

Re: offset storage as kafka with zookeeper 3.4.6

2015-06-11 Thread Kris K
…this topic? If you are trying to consume it, then you will need to set exclude.internal.topics=false in your consumer properties. You can also check consumer mbeans that give the KafkaCommitRate, or enable trace logging in either the consumer or the broker's request logs to…

offset storage as kafka with zookeeper 3.4.6

2015-06-11 Thread Kris K
I am trying to migrate the offset storage to kafka (3 brokers of version 0.8.2.1) using the consumer property offsets.storage=kafka. I noticed that a new topic, __consumer_offsets got created. But nothing is being written to this topic, while the consumer offsets continue to reside on zookeeper.
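For the migration described above, 0.8.2 supports a dual-commit path so offsets land in both stores while consumers are transitioned. A hedged consumer.properties sketch using real 0.8.2 old-consumer settings (the sequencing note is the documented migration procedure, abbreviated):

```properties
# Step 1: commit offsets to both Kafka and ZooKeeper while migrating.
offsets.storage=kafka
dual.commit.enabled=true
# Step 2 (after all consumers in the group are on this config):
# set dual.commit.enabled=false to commit to Kafka only.

# Only needed if you want to consume __consumer_offsets directly,
# as noted in the reply in this thread:
exclude.internal.topics=false
```

If __consumer_offsets stays empty, it is worth confirming the consumers actually picked up offsets.storage=kafka, since ZooKeeper remains the default in 0.8.2.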