Decommissioning a broker

2015-07-30 Thread Andrew Otto
I’m sure this has been asked before, but I can’t seem to find the answer. I’m planning a Kafka cluster expansion and upgrade to 0.8.2.1. In doing so, I will be decommissioning a broker. I plan to remove this broker fully from the cluster, and then reinstall it and use it for a different purpose…
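For reference, the usual way to drain a broker before removal is the stock kafka-reassign-partitions.sh tool; the broker IDs, topic name, and ZooKeeper address below are hypothetical placeholders, not from the thread:

    # topics-to-move.json (hand-written) lists the topics whose partitions
    # should leave the broker being decommissioned, e.g.:
    #   {"version": 1, "topics": [{"topic": "my-topic"}]}

    # Generate a candidate assignment over the brokers that will remain
    # (here IDs 0, 1 and 2; the retiring broker is excluded from the list):
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --topics-to-move-json-file topics-to-move.json \
      --broker-list "0,1,2" --generate

    # Save the proposed assignment as reassignment.json, then apply it:
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --reassignment-json-file reassignment.json --execute

    # Confirm every partition has moved before shutting the broker down:
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --reassignment-json-file reassignment.json --verify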

Re: New Consumer API and Range Consumption with Fail-over

2015-07-30 Thread Jason Gustafson
Hi Bhavesh, I'm not totally sure I understand the expected behavior, but I think this can work. Instead of seeking to the start of the range before the poll loop, you should probably provide a ConsumerRebalanceCallback to get notifications when group assignment has changed (e.g. when one of your n…
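A minimal sketch of the pattern Jason describes, written against the new consumer API as it was eventually released (the trunk-era ConsumerRebalanceCallback was later renamed ConsumerRebalanceListener); startOffsetFor() is a hypothetical helper that maps a partition to the start of its range:

    import java.util.Collection;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class RangeRebalanceExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder address
            props.put("group.id", "range-consumers");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("my-topic"),
                new ConsumerRebalanceListener() {
                    public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                        // A partition is moving to another group member; persist
                        // progress here so the new owner can resume the range.
                    }
                    public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        // Seek inside the callback, not before the poll loop, so a
                        // fail-over to this instance also starts at the right place.
                        for (TopicPartition tp : parts) {
                            consumer.seek(tp, startOffsetFor(tp));
                        }
                    }
                });
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(500)) {
                    System.out.printf("%s-%d @ %d%n",
                        rec.topic(), rec.partition(), rec.offset());
                }
            }
        }

        // Hypothetical: look up the configured start offset of tp's range.
        static long startOffsetFor(TopicPartition tp) {
            return 0L;
        }
    }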

Re: Connection to zk shell on Kafka

2015-07-30 Thread Jiangjie Qin
This looks like an issue to be fixed. I created KAFKA-2385 for this. Thanks, Jiangjie (Becket) Qin On Wed, Jul 29, 2015 at 10:33 AM, Chris Barlock wrote: > I'm a user of Kafka/ZooKeeper, not one of its developers, so I can't give > you a technical explanation. I do agree that Kafka should ship the…

Re: The meaning of the term "group"

2015-07-30 Thread Gwen Shapira
Can you point specifically to which offsets() function you are referring to? (i.e., an object or file name will help.) I didn't find a method that takes group as a parameter in the consumer API... On Wed, Jul 29, 2015 at 2:37 PM, Keith Wiley wrote: > My understanding is that the group id indicated to…
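For readers following along, the "group" under discussion is the consumer's group.id: consumers sharing a group.id split a topic's partitions between them, while a different group.id receives the full stream independently. A minimal sketch with the 0.8.x high-level consumer (connection string and group name are placeholders):

    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class GroupIdExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181");  // placeholder quorum
            props.put("group.id", "reporting-service");  // the "group" in question
            props.put("auto.offset.reset", "smallest");
            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            // With ZooKeeper offset storage, this group's positions live under
            // /consumers/reporting-service/offsets/<topic>/<partition>.
        }
    }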

New Consumer API and Range Consumption with Fail-over

2015-07-30 Thread Bhavesh Mistry
Hello Kafka Dev Team, with the new Consumer API redesign ( https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java ), is there a capability to consume a given topic and partition from a start position to an end position? How would I achieve the following use…
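For the start/end-position half of the question, a sketch against the new consumer's manual-assignment API as it was eventually released (assign plus seek, no group management); the topic, partition, and offset range below are hypothetical:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class BoundedRead {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder address
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            long start = 1000L, end = 2000L; // hypothetical range [start, end)
            TopicPartition tp = new TopicPartition("my-topic", 0);

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.assign(Collections.singletonList(tp)); // no group rebalancing
                consumer.seek(tp, start);                       // begin at range start
                boolean done = false;
                while (!done) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(500)) {
                        if (rec.offset() >= end) { done = true; break; } // range end
                        System.out.println(rec.offset() + ": " + rec.value());
                    }
                }
            }
        }
    }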

Impact of Zookeeper Unavailability on Running Producers/Consumers

2015-07-30 Thread Mohit Gupta
Hi, we are using ZooKeeper for committing the consumer offsets. The ZooKeeper service has become unavailable because its disk filled up. The producers/consumers seem to be running fine (in terms of the number of messages consumed/produced per hour). While we are fixing the issue, we just want to know its impact…
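One mitigation worth noting (an aside, not from the thread): 0.8.2 can commit offsets to Kafka itself instead of ZooKeeper, so a ZK outage no longer blocks offset commits. A sketch of the relevant high-level consumer settings:

    # consumer.properties (0.8.2 high-level consumer)
    offsets.storage=kafka      # commit to Kafka's internal offsets topic
    dual.commit.enabled=true   # also write to ZK during migration; disable once done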