Re: Frequent ZK session timeouts

2016-01-12 Thread Dillian Murphey
Last comment: I upgraded to Java 1.7 and restarted Kafka. It's now stable, but I have not poked at it; I'm just letting it sit for now. Could the fact that I was running 1.6 with 0.8.2.1 have somehow been related to the problem, just not apparent in the logs? On Tue, Jan 12, 2016 at 11:19 PM, Di

Re: Frequent ZK session timeouts

2016-01-12 Thread Dillian Murphey
[2016-01-12 22:16:59,629 ] TRACE [Controller 925537]: leader imbalance ratio for broker 925537 is 0.00 (kafka.controller.KafkaController) [2016-01-12

Re: 409 Conflict

2016-01-12 Thread Ewen Cheslack-Postava
Is the consumer registration failing, or the subsequent calls to read from the topic? From the error, it sounds like the latter -- a conflict during registration should generate a 40902 error. Can you give more info about the sequence of requests that causes the error? The set of commands you gave

Re: Controlled shutdown not relinquishing leadership of all partitions

2016-01-12 Thread Ján Koščo
Not sure, but should a combination of auto.leader.rebalance.enable=true and controlled.shutdown.enable=true sort this out for you? 2016-01-13 1:13 GMT+01:00 Scott Reynolds : > we use 0.9.0.0 and it is working fine. Not all the features work and a few > things make a few assumptions about how zookee
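A minimal sketch of the two broker settings mentioned above, in server.properties form; the related imbalance knobs are shown with their stock defaults, and none of this is a verified fix for the reported issue:

    # server.properties (broker) - illustrative sketch
    # Move partition leadership off a broker before it shuts down
    controlled.shutdown.enable=true
    # Let the controller periodically move leadership back to preferred replicas
    auto.leader.rebalance.enable=true
    # How often the controller checks, and how much imbalance it tolerates (defaults)
    leader.imbalance.check.interval.seconds=300
    leader.imbalance.per.broker.percentage=10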

Re: Frequent ZK session timeouts

2016-01-12 Thread Mayuresh Gharat
Can you paste the logs? Thanks, Mayuresh On Tue, Jan 12, 2016 at 4:58 PM, Dillian Murphey wrote: > Possibly running more stable with 1.7 JVM. > > Can someone explain the Zookeeper session? Should it never expire, unless > the broker becomes unresponsive? I set a massive timeout value in the

Re: fallout from upgrading to the new Kafka producers

2016-01-12 Thread Rajiv Kurian
Thanks Guozhang. I have upgraded to 0.9.0 now. Are there any other producer changes to be aware of? My understanding is that there were no big producer changes from 0.8.2 to 0.9.0. Thanks, Rajiv On Mon, Jan 11, 2016 at 5:52 PM, Guozhang Wang wrote: > Hi Rajiv, > > This warning could be ignor

Re: Frequent ZK session timeouts

2016-01-12 Thread Dillian Murphey
Possibly running more stable with the 1.7 JVM. Can someone explain the Zookeeper session? Should it never expire unless the broker becomes unresponsive? I set a massive timeout value in the broker config, far beyond the amount of time after which I see the ZK expiration. Is this entirely on the kafka side, or c
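For reference, a sketch of the broker-side ZooKeeper settings being discussed, assuming a 0.8.2.x server.properties; the timeout values are arbitrary examples, not recommendations:

    # server.properties - ZooKeeper session handling (example values)
    zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
    # how long the broker can go silent before ZooKeeper expires its session
    zookeeper.session.timeout.ms=30000
    # how long to wait when establishing the initial connection
    zookeeper.connection.timeout.ms=30000

Note that a JVM pause on the broker (e.g. a long GC) that exceeds the session timeout can still expire the session regardless of these settings, which is why the JVM version question above is plausible.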

Re: Controlled shutdown not relinquishing leadership of all partitions

2016-01-12 Thread Scott Reynolds
We use 0.9.0.0 and it is working fine. Not all the features work, and a few things make a few assumptions about how ZooKeeper is used. But as a tool for provisioning, expanding, and failure recovery it is working fine so far. *knocks on wood* On Tue, Jan 12, 2016 at 4:08 PM, Luke Steensen < luke.st

Re: Controlled shutdown not relinquishing leadership of all partitions

2016-01-12 Thread Luke Steensen
Ah, that's a good idea. Do you know if kafka-manager works with kafka 0.9, by chance? That would be a nice improvement over the CLI tools. Thanks, Luke On Tue, Jan 12, 2016 at 4:53 PM, Scott Reynolds wrote: > Luke, > > We practice the same immutable pattern on AWS. To decommission a broker, we >

RE: 409 Conflict

2016-01-12 Thread Heath Ivie
Additional Info: When I enter the curl commands from the 2.0 document, I get the same error:
$ curl -X POST -H "Content-Type: application/vnd.kafka.json.v1+json" --data '{"records":[{"value":{"foo":"bar"}}]}' "http://10.1.30.48:8082/topics/jsontest"
$ curl -X POST -H "Content-Type: application

409 Conflict

2016-01-12 Thread Heath Ivie
Hi, I am running into an issue where I cannot register new consumers. The server consistently returns error code 40901: "Consumer cannot subscribe the the specified target because it has already subscribed to other topics". I am using different groups, different topics and different names bu
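For context, a hedged sketch of the REST Proxy v1 consumer flow involved here; the host is taken from the follow-up message, and the group/instance names are made up:

    # register a consumer instance in group "my_group" (illustrative only)
    curl -X POST -H "Content-Type: application/vnd.kafka.v1+json" \
         --data '{"name": "my_instance", "format": "json", "auto.offset.reset": "smallest"}' \
         http://10.1.30.48:8082/consumers/my_group
    # then consume from a topic via that instance
    curl -X GET -H "Accept: application/vnd.kafka.json.v1+json" \
         http://10.1.30.48:8082/consumers/my_group/instances/my_instance/topics/jsontest

As noted in the reply above, a conflict during registration itself would surface as 40902; 40901 is returned when an existing instance tries to read a different target than the one it first subscribed to.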

Frequent ZK session timeouts

2016-01-12 Thread Dillian Murphey
Our 2-node kafka cluster has become unhealthy. We're running zookeeper as a 3-node ensemble with very light load. What seems to be happening is that in the controller log we get a ZK session expire message, and in the process of re-assigning the leader for the partitions (if I'm understanding this rig

Re: Controlled shutdown not relinquishing leadership of all partitions

2016-01-12 Thread Scott Reynolds
Luke, We practice the same immutable pattern on AWS. To decommission a broker, we first use partition reassignment to move the partitions off of the node, followed by preferred leadership election. To do this with a web UI, so that you can handle it on lizard brain at 3 am, we have the Yahoo Kafka Manager
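The CLI equivalent of the two steps described above, sketched for 0.8.2/0.9; the ZooKeeper address, broker ids, and JSON file names are placeholders:

    # 1. generate and execute a reassignment that moves partitions off the departing broker
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
        --topics-to-move-json-file topics-to-move.json \
        --broker-list "1,2,3" --generate
    # review the proposed assignment, save it as reassignment.json, then:
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
        --reassignment-json-file reassignment.json --execute
    # 2. once replicas have moved, trigger preferred replica (leader) election
    bin/kafka-preferred-replica-election.sh --zookeeper zk1:2181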

Controlled shutdown not relinquishing leadership of all partitions

2016-01-12 Thread Luke Steensen
Hello, We've run into a bit of a head-scratcher with a new kafka deployment and I'm curious if anyone has any ideas. A little bit of background: this deployment uses "immutable infrastructure" on AWS, so instead of configuring the host in-place, we stop the broker, tear down the instance, and rep

Re: Does quota requires 0.9.X clients?

2016-01-12 Thread Joel Koshy
I'm pretty sure it should work - you may want to give it a try locally though. We did add a throttle-time field in the responses, but that will only be included in responses to requests from 0.9.x clients. 0.8.x requests will just get throttled at the broker and will get a 0.8.x-format response wi
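As a rough sketch, the server-side quota settings referred to in this thread are plain broker configs in 0.9; the bytes-per-second values below are arbitrary examples:

    # server.properties - default per-client-id quotas, enforced at the broker
    quota.producer.default=1048576
    quota.consumer.default=2097152

Because enforcement happens in the broker's request handling, 0.8.x clients are throttled too; they simply receive a delayed response without the throttle-time field mentioned above.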

Does quota requires 0.9.X clients?

2016-01-12 Thread Allen Wang
From looking at the design document, it seems quota is implemented purely on the server side, so it should work with 0.8.X clients. But I would like to get confirmation. Thanks, Allen

Re: Consumer group disappears and consumers loops

2016-01-12 Thread Phillip Walker
I've done some work isolating the problem a bit more, running the code normally instead of through my IDE, etc. KIP-41 should resolve part of the problem, but I found potentially related issues. The UNKNOWN_MEMBER_ID error does occur, as expected, when a given thread fails to call poll() again wit

Re: Kafka Connect usage

2016-01-12 Thread Shiti Saxena
Hi, Thanks for the quick reply. It would be nice if an error were thrown saying the Kafka server is not available. I kept looking at the code in WorkerSourceTask and couldn't understand what was wrong. Thanks, Shiti On 12 Jan 2016 23:54, "Liquan Pei" wrote: > Hi Shiti, > > You need to start Kafka server

Re: Stalling behaviour with 0.9 console consumer

2016-01-12 Thread Suyog Rao
Hi Gerard, I am not sure why the min.fetch.bytes setting would cause a pause. Also, in the 0.9 config there is no min.fetch.bytes; the only bytes-related setting is MAX_PARTITION_FETCH_BYTES_CONFIG. See: https://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/consumer/ConsumerConfig.html
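For reference, a sketch of how that setting looks in a properties file for the 0.9 new consumer; the value shown is the documented 1 MB default:

    # consumer.properties (0.9 new consumer)
    bootstrap.servers=localhost:9092
    group.id=test-group
    # property behind MAX_PARTITION_FETCH_BYTES_CONFIG:
    # upper bound on data returned per partition in a single fetch
    max.partition.fetch.bytes=1048576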

Re: Kafka Connect usage

2016-01-12 Thread Liquan Pei
Hi Shiti, You need to start the Kafka server and ZooKeeper before running Kafka Connect. Thanks, Liquan On Tue, Jan 12, 2016 at 10:22 AM, Shiti Saxena wrote: > Hi Alex, > > I am using the default files. > > Do we need to start Kafka server and zookeeper separately before starting > Kafka connect?
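A minimal startup sequence, assuming a stock Kafka 0.9 download and the sample properties files it ships with:

    # 1. ZooKeeper
    bin/zookeeper-server-start.sh config/zookeeper.properties
    # 2. Kafka broker
    bin/kafka-server-start.sh config/server.properties
    # 3. Kafka Connect, standalone mode (worker config plus connector configs)
    bin/connect-standalone.sh config/connect-standalone.properties \
        config/connect-file-source.properties config/connect-file-sink.properties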

Re: Kafka Connect usage

2016-01-12 Thread Shiti Saxena
Hi Alex, I am using the default files. Do we need to start the Kafka server and ZooKeeper separately before starting Kafka Connect? Thanks, Shiti On 12 Jan 2016 23:11, "Alex Loddengaard" wrote: > Hi Shiti, I'm not able to reproduce the problem with the default > *.properties files you're passing t

Re: Kafka Connect usage

2016-01-12 Thread Alex Loddengaard
Hi Shiti, I'm not able to reproduce the problem with the default *.properties files you're passing to connect-standalone.sh. Can you share these three files? Thanks, Alex On Mon, Jan 11, 2016 at 10:14 PM, Shiti Saxena wrote: > Hi, > > I tried executing the following, > > bin/connect-standalone

Re: flafka comsumer fetch size

2016-01-12 Thread Tim Williams
IIRC, just add it to your Flume configs, e.g. for a source: tier1.sources.src1.kafka.fetch.message.max.bytes= Thanks, --tim On Tue, Jan 12, 2016 at 7:25 AM, manish jaiswal wrote: > i m trying to read more than 1mb msg from kafka using flume > and i m getting fetch size error. > > where to define
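Filling that line out into a fuller, hypothetical Flume agent config (the agent name "tier1", source name, and topic are placeholders), with the fetch size raised above the 1 MB default that triggers MessageSizeTooLargeException:

    tier1.sources.src1.type = org.apache.flume.source.kafka.KafkaSource
    tier1.sources.src1.zookeeperConnect = zk1:2181
    tier1.sources.src1.topic = test
    tier1.sources.src1.channels = ch1
    # pass-through to the underlying Kafka consumer: max fetch size in bytes
    tier1.sources.src1.kafka.fetch.message.max.bytes = 2097152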

Re: Stalling behaviour with 0.9 console consumer

2016-01-12 Thread Brice Dutheil
Hi Gerard, Why should the fetch size be correlated with the consumer stalling after x messages? One can set the fetch size on a Cassandra query, and yet there's no "stalling"; it's more or less just another "page". Cheers, -- Brice On Tue, Jan 12, 2016 at 12:10 PM, Gerard Klijs wrote: > Hi Suy

Re: best python library to use?

2016-01-12 Thread Andrew Otto
I’m not the maintainer of either pykafka or librdkafka, so I can’t really comment much on the benefit, but you may be right. However, librdkafka is well maintained and solid, so using it as the backing for a Python client gets you the benefit of not having to reinvent features yourself in Pyt

flafka comsumer fetch size

2016-01-12 Thread manish jaiswal
I'm trying to read messages larger than 1 MB from Kafka using Flume, and I'm getting a fetch size error. Where in the Flume config do I define fetch.message.max.bytes? Error: kafka.common.MessageSizeTooLargeException: Found a message larger than the maximum fetch size of this consumer on topic test partiti

Re: Memory records is not writable in MirrorMaker

2016-01-12 Thread Meghana Narasimhan
Hi, Came across a similar issue. We are running a 3 node cluster (kafka version 0.9) and Node 0 also has a few mirror makers running. When we do a rolling restart of the cluster, the mirror maker shuts down with the following errors. [2016-01-11 20:16:00,348] WARN Got error produce response with c

Re: Stalling behaviour with 0.9 console consumer

2016-01-12 Thread Gerard Klijs
Hi Suyog, It's working as intended. You could set the property min.fetch.bytes to a small value to get fewer messages in each batch. Setting it to zero will probably mean you get one object with each batch; at least that was the case when I tried, but I was producing and consuming at the same time. On Tue

Re: Kafka Is Featured on HPE Matter

2016-01-12 Thread Marko Bonaći
Wait, what "Kafka story"? The only Kafka-related sentence is: Kafka is an amazing tool for streaming data, so if your business strategy centers on streaming data, then it’s important. If not, it’s not worth your attention, no matter how trendy it is. You know, every "batch" was a stream when it wa