Error for Kafka / Topic has no leader and can't be listed with kafka-topics --describe

2016-03-08 Thread Tobias Adamson
Hi, I posted this to IRC but maybe someone here has seen this before. I'm having some weird issues with Kafka 0.9 and can't find anything in JIRA. I'm running Kafka inside Docker/Kubernetes. All works fine when I deploy, but after a while I get the following in the publisher: WARN: Error while fe

SimpleConsumerShell not honouring all options

2016-03-08 Thread Anishek Agarwal
Hello, following the doc at https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-SimpleConsumerShell I tried to print messages using the command ./kafka-run-class.sh kafka.tools.SimpleConsumerShell —-max-messages 1 --no-wait-at-logend —-print-offsets --partition 17 --offset 7644

Producer keeps trying to reconnect if kafka is down & fills up log file

2016-03-08 Thread NISHANT BULCHANDANI
Hi all, we are using the Kafka producer in our web application. Things are fine when Kafka is up, but when it goes down the log gets flooded with the following error: 2016-03-07 12:37:42,813 WARN [kafka-producer-network-thread | producer-1] [Selector] [line : 276 ] - Error in I/O with /10.4

Re: 0.9.0.1 Kafka assign partition to new Consumer error

2016-03-08 Thread Ken Cheng
Hi Jason, thanks for your detailed explanation. Given this situation, I'd like to discuss it a little more. I tried two approaches to avoid it in Kafka 0.9.0.1, and both work correctly. 1. Use subscribe(topics, listener) and implement onPartitionsAssigned(partitions), which manually runs consumer.commitS
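A minimal sketch of what the first approach might look like with the 0.9 Java consumer: subscribe with a rebalance listener and commit an initial offset for newly assigned partitions. The message is truncated after consumer.commitS, so the exact offsets committed (the consumer's current position here) and the broker, group, and topic names are assumptions.

import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class RebalanceCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "example-group");              // placeholder
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        consumer.subscribe(Arrays.asList("example-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Commit whatever has been processed before losing the partitions.
                consumer.commitSync();
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Record a committed offset for each newly assigned partition
                // so later consumers find something to resume from.
                Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                for (TopicPartition tp : partitions) {
                    offsets.put(tp, new OffsetAndMetadata(consumer.position(tp)));
                }
                consumer.commitSync(offsets);
            }
        });

        while (true) {
            consumer.poll(100);  // process records here
        }
    }
}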

Re: Log cleaner error

2016-03-08 Thread Rakesh Vidyadharan
Found the issue. My publisher was not assigning a key to all messages. On 07/03/2016 14:40, "Rakesh Vidyadharan" wrote: >Hello, > >We are using Kafka 0.8.2.2 and have modified most of our topics to use log >compaction and a shorter retention.ms equivalent to 24 hours for those topics. > We
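For reference, log compaction requires every message on a compacted topic to carry a non-null key. A minimal producer sketch that always supplies one; the broker address, topic, key, and value are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // A compacted topic keeps the latest value per key, so the key must
        // never be null; here an entity id is used as the key.
        producer.send(new ProducerRecord<>("compacted-topic", "entity-42", "latest state"));
        producer.close();
    }
}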

Zookeeper sessions keep expiring…no heartbeats?

2016-03-08 Thread NISHANT BULCHANDANI
Hi, we are using the Kafka high-level consumer, and we are able to successfully consume messages, but the ZooKeeper connections keep expiring and being re-established. I am wondering why there are no heartbeats to keep the connections alive: Kafka Consumer Logs [localhost-startStop-1

Re: 0.9.0.1 Kafka assign partition to new Consumer error

2016-03-08 Thread Jason Gustafson
Hey Ken, Whether to use subscribe or assign depends mainly on whether you need to use consumer groups to distribute the topic load. If you use subscribe(), then the partitions for the subscribed topics will be divided among all consumers sharing the same groupId. With assign(), you have to provide
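A short sketch contrasting the two modes described above; the broker address, group ids, topic name, and partition numbers are placeholders:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SubscribeVsAssignSketch {
    private static KafkaConsumer<String, String> newConsumer(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("group.id", groupId);
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }

    public static void main(String[] args) {
        // subscribe(): the partitions of the topic are divided among all
        // consumers sharing the same group.id.
        KafkaConsumer<String, String> grouped = newConsumer("shared-group");
        grouped.subscribe(Arrays.asList("example-topic"));

        // assign(): no group management; this consumer reads exactly the
        // partitions it is given, regardless of other consumers.
        KafkaConsumer<String, String> standalone = newConsumer("standalone");
        standalone.assign(Arrays.asList(new TopicPartition("example-topic", 0),
                                        new TopicPartition("example-topic", 1)));
    }
}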

Re: Larger Size Error Message

2016-03-08 Thread Fang Wong
Thanks Guozhang! No, I don't have a way to reproduce this issue. It happens randomly; I am changing the log level from INFO to TRACE to see if I can get the exact message that was sent when this happens. Could it also be an encoding issue or something related to a partial message? Thanks, Fang On Mon, Mar 7,

Re: Larger Size Error Message

2016-03-08 Thread Guozhang Wang
I cannot think of an encoding or partial message issue off the top of my head (I browsed through the 0.8.2.2 tickets, and none of them seems related either). Guozhang On Tue, Mar 8, 2016 at 11:45 AM, Fang Wong wrote: > Thanks Guozhang! > > No I don't have a way to reproduce this issue. It randomly happens, I

What is the best way to ensure connectivity to Kafka without polling any messages

2016-03-08 Thread Anas Mosaad
Hi all, I am new to Kafka. I want to make sure the Kafka server is available before I do any polling for messages. What I did is create a consumer without a subscription and try to access the partition metadata for a specific topic. This trick works the first time. If I take the brokers do

Re: What is the best way to ensure connectivity to Kafka without polling any messages

2016-03-08 Thread Gwen Shapira
What we normally do is consumer.poll(0). This connects to the broker, finds the consumer group, handles partition assignment, gets the metadata - and then doesn't stick around to actually give you any data. Pretty hacky, but we use this all over the place. Gwen On Tue, Mar 8, 2016 at 12:59 PM, A
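A sketch of the poll(0) trick with the 0.9 Java consumer; the broker address, group id, and topic are placeholders, and the exact behaviour when the brokers are unreachable depends on the client configuration:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConnectivityCheckSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("group.id", "connectivity-check");       // placeholder
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("example-topic"));
        // poll(0) connects to the broker, joins the group, gets assignments
        // and metadata, but does not wait around to return any records.
        consumer.poll(0);
        consumer.close();
    }
}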

seekToBeginning doesn't work without auto.offset.reset

2016-03-08 Thread Cody Koeninger
Using the 0.9 consumer, I would like to start consuming at the beginning or end, without specifying auto.offset.reset. This does not seem to be possible: val kafkaParams = Map[String, Object]( "bootstrap.servers" -> conf.getString("kafka.brokers"), "key.deserializer" -> classOf[St

Multiple topics to one consumer

2016-03-08 Thread 복영빈
Hi! I am quite new to Kafka and pub-sub systems. I am trying to make a single consumer consume messages from multiple topics. Although I understand that one topic can be sent to multiple consumers via partitions, I could not find the part in the documentation that specifies that a single cons

kafka 0.9.0.1: FATAL exception on startup

2016-03-08 Thread Anatoly Deyneka
Hi, I need your advice on how to start the server in the following situation: it fails on startup with a FATAL error: [2016-03-07 16:30:53,495] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable) kafka.common.InvalidOffsetException: Attempt to append a

Re: Multiple topics to one consumer

2016-03-08 Thread Alex Loddengaard
Hi there, One consumer can indeed consume from multiple topics (and multiple partitions). For example, see here: http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#subscribe(java.util.List) Then, in your poll() loop, you can get the topic and partition from
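A minimal sketch of that pattern: one consumer subscribed to several topics, reading the topic and partition off each record. The broker address, group id, and topic names are placeholders.

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MultiTopicConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("group.id", "multi-topic-group");        // placeholder
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // A single consumer can subscribe to any number of topics.
        consumer.subscribe(Arrays.asList("orders", "payments"));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(500);
            for (ConsumerRecord<String, String> record : records) {
                // Each record carries its topic and partition, so the handler
                // can branch on where the message came from.
                System.out.printf("%s[%d]@%d: %s%n",
                        record.topic(), record.partition(), record.offset(), record.value());
            }
        }
    }
}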

Re: SimpleConsumerShell not honouring all options

2016-03-08 Thread Ewen Cheslack-Postava
The characters for the first dash in both --max-messages and --print-offsets don't look like a standard '-'; is it possible those options simply aren't being parsed correctly? -Ewen On Tue, Mar 8, 2016 at 12:26 AM, Anishek Agarwal wrote: > Hello > > following doc @ > > https://cwiki.apache.or

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-08 Thread Guozhang Wang
Hi Cody, The problem with that code is calling `seekToBeginning()` right after `subscribe(topic)`. Since the `subscribe` call is lazily evaluated, by the time `seekToBeginning()` is called no partitions are assigned yet, and hence it is effectively a no-op. Try consumer.subscribe(topics) consumer.p
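A sketch of that ordering in the Java consumer API (the original snippet is Scala and truncated, so the broker address, group id, and topic here are placeholders): subscribe first, poll once so partitions are actually assigned, then seek. Note that the first poll still relies on a usable auto.offset.reset, which is the follow-up point below.

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekToBeginningSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("group.id", "seek-example");             // placeholder
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("example-topic"));

        // subscribe() is lazy: partitions are only assigned during poll(),
        // so a seek issued before the first poll() has nothing to act on.
        // The first poll() here assumes the default auto.offset.reset.
        consumer.poll(0);

        // Now the assignment exists, so the seek takes effect.
        consumer.seekToBeginning(
                consumer.assignment().toArray(new TopicPartition[0]));

        // Subsequent poll() calls read from the earliest available offsets.
        consumer.close();
    }
}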

Poll Interval for Kafka Connect Source

2016-03-08 Thread Shiti Saxena
Hi, Is there a configuration to set the poll interval for a SourceTask using the Connect API? The JDBC connector uses a custom property, poll.interval.ms, but is there a built-in property which can be used by different connectors? Thanks, Shiti
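As far as I can tell the 0.9 Connect API has no framework-level interval setting, so connectors handle throttling themselves. A hypothetical SourceTask sketch that honours its own poll.interval.ms property; the property name, default value, and sleep-based throttling are assumptions modelled on what the JDBC connector does:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class IntervalSourceTask extends SourceTask {
    private long pollIntervalMs;

    @Override
    public String version() {
        return "0.1";
    }

    @Override
    public void start(Map<String, String> props) {
        // Connector-specific property; name and default are assumptions.
        pollIntervalMs = Long.parseLong(
                props.getOrDefault("poll.interval.ms", "5000"));
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        // The framework calls poll() in a loop; sleeping here throttles how
        // often the task goes back to the source system.
        Thread.sleep(pollIntervalMs);
        return Collections.emptyList();  // fetch and convert real records here
    }

    @Override
    public void stop() {
        // Release any resources held against the source system.
    }
}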

Re: SimpleConsumerShell not honouring all options

2016-03-08 Thread Anishek Agarwal
Thanks! Stupid text editor on the Mac. I had removed that character from "offset" and "partition" but forgot the others. On Wed, Mar 9, 2016 at 10:17 AM, Ewen Cheslack-Postava wrote: > The characters for the first dash in both --max-messages and > --print-offsets don't look like a standard '-

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-08 Thread Cody Koeninger
That suggestion doesn't work, for pretty much the same reason: at the time poll is first called, there is no reset policy and no committed offset, so NoOffsetForPartitionException is thrown. I feel like the underlying problem isn't so much that seekToEnd needs special-case behavior. It's more tha