Failed to parse the broker info from zookeeeper

2016-08-08 Thread Sven Ott
Hello everyone, I downloaded the latest Kafka version and tried to run the quick start. Unfortunately, I cannot create a topic after starting the ZooKeeper and Kafka servers as described in step 2. The error: Failed to parse the broker info from zookeeeper: {"jmx_port":-1, "timestamp":"...", "e

Re: Failed to parse the broker info from zookeeeper

2016-08-08 Thread Radoslaw Gruchalski
Did you copy/paste this JSON or type it by hand? {"jmx_port":-1, "timestamp":"...", "endpoints":["PLAINSTEXT://sven:9092]","host":"sven","version":3,"port":9092} There are two errors in this JSON: in ["PLAINSTEXT://sven:9092]” the quotes don't match, and the protocol is PLAINTEXT, not PLAINSTEXT. – Best regar
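
For comparison, the same registration payload with the quotes balanced and the listener protocol spelled PLAINTEXT would look like the following (the timestamp is elided here exactly as in the original):

    {"jmx_port":-1,"timestamp":"...","endpoints":["PLAINTEXT://sven:9092"],"host":"sven","version":3,"port":9092}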

Aw: Re: Failed to parse the broker info from zookeeeper

2016-08-08 Thread Sven Ott
I typed it, because I don't have internet access on the machine where I am trying to run Kafka. Sorry for the mistakes. Sent: Monday, 08 August 2016 at 09:46 From: "Radoslaw Gruchalski" To: "Sven Ott" , users@kafka.apache.org Subject: Re: Failed to parse the broker info from zookeeep

Re: Failed to parse the broker info from zookeeeper

2016-08-08 Thread Jaikiran Pai
Step 2 of the quickstart has a couple of commands. Which exact command shows this exception, and is there more to it, like a stack trace? Can you post that somehow? -Jaikiran On Monday 08 August 2016 12:46 PM, Sven Ott wrote: Hello everyone, I downloaded the latest Kafka ve

Aw: Re: Failed to parse the broker info from zookeeeper

2016-08-08 Thread Sven Ott
It fails in step 3:  bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test Error while executing topic command : Failed to parse the broker info from zookeeper: {"jmx_port":,"timestamp":"1470650706921","endpoints":["PLAINTEXT://sven:9092

Re: Failed to parse the broker info from zookeeeper

2016-08-08 Thread Radoslaw Gruchalski
The exception is: Caused by: kafka.common.KafkaException: Unable to parse PLAINTEXT://sven:9092 to a broker endpoint. And it happens here: https://github.com/apache/kafka/blob/0.10.0/core/src/main/scala/kafka/cluster/EndPoint.scala#L47 Do you have any non-ASCII characters in your URI? Something in

Re: Re: Failed to parse the broker info from zookeeeper

2016-08-08 Thread Sven Ott
Ah, I got it. An underscore is not allowed and the host address contained one (I didn't post the full name here). Thanks for the quick help. Best, Sven Sent: Monday, 08 August 2016 at 12:46 From: "Radoslaw Gruchalski" To: users@kafka.apache.org, users@kafka.apache.org Cc: users@kafka.ap
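
A rough illustration of the underscore problem: the broker endpoint string is matched against a protocol://host:port pattern whose host part only admits letters, digits, dots and dashes, so a host containing "_" fails to parse. The authoritative pattern is the one in EndPoint.scala linked above; the regex and the hostname sven_host below are only assumptions for the sake of the sketch.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class EndpointParseCheck {
        // Approximation of a strict protocol://host:port pattern; the host part here
        // deliberately excludes underscores, mirroring the failure described above.
        // The authoritative pattern is in kafka.cluster.EndPoint, not this one.
        private static final Pattern ENDPOINT =
                Pattern.compile("^([A-Z]+)://([0-9a-zA-Z.\\-]+):(-?[0-9]+)$");

        public static void main(String[] args) {
            // "sven_host" is a hypothetical hostname with an underscore.
            for (String uri : new String[]{"PLAINTEXT://sven:9092", "PLAINTEXT://sven_host:9092"}) {
                Matcher m = ENDPOINT.matcher(uri);
                System.out.println(uri + " -> " + (m.matches() ? "parsed" : "unable to parse"));
            }
        }
    }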

Re: Log compaction leaves empty files?

2016-08-08 Thread Harald Kirsch
Hi Dustin, thanks for the reply. To be honest, I am trying to work around https://issues.apache.org/jira/browse/KAFKA-1194. I implemented a small main() to call the LogCleaner for compaction and it seemed to work, but it left the zero-byte files. The idea is to stop the broker, then run the com

RE: Kafka consumer getting duplicate message

2016-08-08 Thread Ghosh, Achintya (Contractor)
Thank you, Ewen, for your response. Actually we are using the 1.0.0.M2 Spring Kafka release, which uses the Kafka 0.9 release. Yes, we see a lot of duplicates, and here are our producer and consumer settings in the application. We don't see any duplication at the producer end; I mean, if we send 1000 messages to a parti
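
For context on where consumer-side duplicates usually come from: with the 0.9 consumer, records that were processed but whose offsets were not yet committed before a rebalance or restart are redelivered. Below is a minimal sketch of a consumer that disables auto-commit and commits synchronously only after processing each batch; the broker address, group id and topic name are placeholders, and this narrows the redelivery window rather than eliminating it.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class CommitAfterProcessing {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
            props.put("group.id", "my-group");                   // assumed group id
            props.put("enable.auto.commit", "false");            // commit only after processing
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic")); // assumed topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        process(record); // application logic goes here
                    }
                    consumer.commitSync(); // offsets committed only after the batch is processed
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) {
            System.out.printf("%s-%d-%d: %s%n", record.topic(), record.partition(), record.offset(), record.value());
        }
    }

If the process dies between processing and commitSync(), the last batch is still redelivered, so downstream handling needs to be idempotent if duplicates cannot be tolerated.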

[RESULTS] [VOTE] Release Kafka 0.10.0.1

2016-08-08 Thread Ismael Juma
The vote for the Kafka 0.10.0.1 release passed with 14 +1 votes (4 binding) and no 0 or -1 votes. +1 votes PMC Members: * Neha Narkhede * Guozhang Wang * Jun Rao * Joel Koshy Committers: * Gwen Shapira * Harsha Chintalapani * Ismael Juma Community: * Jim Jagielski * Tom Crayford * Dana Powers *

Re: [kafka-clients] [VOTE] 0.10.0.1 RC2

2016-08-08 Thread Ismael Juma
+1 (non-binding), verified checksums and signature, ran tests on source package and verified quickstart on source and binary packages. On Mon, Aug 8, 2016 at 2:17 AM, Harsha Chintalapani wrote: > +1 (binding) > 1. Ran 3 node cluster > 2. Ran a few tests in creating, producing, consuming from secur

Strange behavior when turn the system clock back.

2016-08-08 Thread Gabriel Ibarra
I am dealing with an issue when turning the system clock back (either due to NTP or administrator action). I'm using kafka_2.11-0.10.0.0 and I follow these steps. - Start a consumer for TOPIC_NAME with group id GROUP_NAME. It will be the owner of all the partitions. - Turn the system clock back. For insta

Re: Testing broker failover

2016-08-08 Thread Alex Loddengaard
Hi Alper, can you share your producer config -- the Properties object? We need to learn more to help you understand the behavior you're observing. Thanks, Alex On Fri, Aug 5, 2016 at 7:45 PM, Alper Akture wrote: > I'm using 0.10.0.0 and testing some failover scenarios. For dev, i have > single

Re: Testing broker failover

2016-08-08 Thread Alper Akture
Thanks Alex... using producer props: {timeout.ms=500, max.block.ms=500, request.timeout.ms=500, bootstrap.servers=localhost:9092, serializer.class=kafka.serializer.StringEncoder, value.serializer=org.apache.kafka.common.serialization.StringSerializer, metadata.fetch.timeout.ms=500, key.serializer=

Re: Testing broker failover

2016-08-08 Thread Alper Akture
I also notice that if I use the send method that takes a callback: public Future send(ProducerRecord record, Callback callback) { call that in the onCompletion also all

Re: Testing broker failover

2016-08-08 Thread Alex Loddengaard
Hi Alper, Thanks for sharing. I was particularly interested in seeing what *acks* was set to. Since you haven't set it, its value is the default, *1*. To handle errors, you need to use the send() method that takes a callback, and build an appropriate callback to handle errors. Take a look here fo
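
A minimal sketch of the pattern Alex describes, assuming a local broker and a topic named test: acks is set explicitly, and send errors surface through the exception argument of the callback.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class ProducerWithCallback {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("acks", "all"); // wait for all in-sync replicas instead of the default of 1
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                ProducerRecord<String, String> record = new ProducerRecord<>("test", "key", "value");
                producer.send(record, new Callback() {
                    @Override
                    public void onCompletion(RecordMetadata metadata, Exception exception) {
                        if (exception != null) {
                            // e.g. a TimeoutException when the broker is down, as in Alper's test
                            System.err.println("Send failed: " + exception);
                        } else {
                            System.out.println("Stored at offset " + metadata.offset());
                        }
                    }
                });
                producer.flush();
            }
        }
    }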

Re: Testing broker failover

2016-08-08 Thread Alper Akture
(continued from previous email) Hit send too soon... but I also notice that if I use the producer send method that takes a callback, in the onCompletion method of the callback, I do see an exception is sent in the callback: org.apache.kafka.common.errors.TimeoutException: Batch containing 2 reco

Re: Testing broker failover

2016-08-08 Thread Alper Akture
Got it, thanks; that's exactly what I had just noticed, and I sent a reply as you were replying. Thanks, Alex. On Mon, Aug 8, 2016 at 11:35 AM, Alex Loddengaard wrote: > Hi Alper, > > Thanks for sharing. I was particularly interested in seeing what *acks* was > set to. Since you haven't set it, its value

kafka jdbc connector error

2016-08-08 Thread Imran Akbar
Hi, I'm trying to use the JDBC connector following these instructions, but am receiving this error: [2016-08-08 18:55:51,958] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:100) java.lang.

Large # of Topics/Partitions

2016-08-08 Thread Daniel Fagnan
Hey all, I’m currently in the process of designing a system around Kafka and I’m wondering about the recommended way to manage topics. Each event stream we have needs to be isolated from the others. A failure in one should not affect another event stream's processing (by failure, we mean a downs

Re: Large # of Topics/Partitions

2016-08-08 Thread Tom Crayford
Hi Daniel, Kafka doesn't provide this kind of isolation or scalability for many, many streams. The usual design is to use a consistent hash of some "key" to attribute your data to a particular partition. That, of course, doesn't isolate things fully, but has everything in a partition dependent on ea
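
A small sketch of the key-hashing idea, assuming a static partition count of 50 and string stream keys; any stable hash works, this just makes the mapping explicit.

    public class KeyToPartition {
        // Map an event-stream key onto one of a fixed set of partitions.
        // Masking the sign bit keeps the result non-negative even for Integer.MIN_VALUE hashes.
        static int partitionFor(String streamKey, int numPartitions) {
            return (streamKey.hashCode() & 0x7fffffff) % numPartitions;
        }

        public static void main(String[] args) {
            int partitions = 50; // assumed static partition count
            System.out.println(partitionFor("stream-42", partitions)); // a given key always lands on the same partition
            System.out.println(partitionFor("stream-43", partitions));
        }
    }

Note that when records are sent with a key, the Java producer's default partitioner already does essentially this (a hash of the key bytes modulo the partition count), so explicit mapping is mainly useful when the assignment must stay stable even if the partition count changes.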

Re: Large # of Topics/Partitions

2016-08-08 Thread Daniel Fagnan
Thanks Tom! This was very helpful and I’ll explore having a more static set of partitions as that seems to fit Kafka a lot better. Cheers, Daniel > On Aug 8, 2016, at 12:27 PM, Tom Crayford wrote: > > Hi Daniel, > > Kafka doesn't provide this kind of isolation or scalability for many many > s

Kafka ACLs CLI Auth Error

2016-08-08 Thread Derar Alassi
Hi all, I have 3-node ZK and Kafka clusters. I have secured ZK with SASL. I got the keytabs done for my brokers and they can connect to the ZK ensemble just fine with no issues. All gravy! Now, I am trying to set ACLs using the kafka-acls.sh CLI. Before that, I did export the KAFKA_OPTS using th

Re: Kafka cluster with a different version that the java API

2016-08-08 Thread Sergio Gonzalez
Perfect, thank you so much Alex On Fri, Aug 5, 2016 at 6:03 PM, Alex Loddengaard wrote: > Hi Sergio, clients have to be the same version or older than the brokers. A > newer client won't work with an older broker. > > Alex > > On Fri, Aug 5, 2016 at 7:37 AM, Sergio Gonzalez < > sgonza...@cecrop

How to Identify Consumers of a Topic?

2016-08-08 Thread Jillian Cocklin
Hello, Our team is using Kafka for the first time and is in the testing phase of getting a new product ready, which uses Kafka as the communications backbone. Basically, a processing unit will consume a message from a topic, do the processing, then produce the output to another topic. Messag

Re: How to Identify Consumers of a Topic?

2016-08-08 Thread Derar Alassi
I use kafka-consumer-offset-checker.sh to check the offsets of consumers, and along with that you get which consumer is attached to each partition. On Mon, Aug 8, 2016 at 3:12 PM, Jillian Cocklin < jillian.cock...@danalinc.com> wrote: > Hello, > > Our team is using Kafka for the first time and are in the t

RE: How to Identify Consumers of a Topic?

2016-08-08 Thread Jillian Cocklin
Thanks Derar, I'll check that out and see if it gives enough information about the consumer to track it. Thanks! Jillian -Original Message- From: Derar Alassi [mailto:derar.ala...@gmail.com] Sent: Monday, August 08, 2016 3:35 PM To: users@kafka.apache.org Subject: Re: How to Identify C

Re: Kafka ACLs CLI Auth Error

2016-08-08 Thread BigData dev
Hi, I think the JAAS config file needs to be changed. Client { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true keyTab="/etc/security/keytabs/kafka.service.keytab" storeKey=true useTicketCache=false serviceName="zookeeper" principal="kafka/hostname.abc@abc.c

keeping a high value for zookeeper.connection.timeout.ms

2016-08-08 Thread Digumarthi, Prabhakar Venkata Surya
Hi All, Right now we are running Kafka on AWS EC2 servers and ZooKeeper is also running on separate EC2 instances. We have created a service (systemd units) for Kafka and ZooKeeper to make sure that they are started in case the server gets rebooted. The problem is sometimes the ZooKeeper servers

kafka-consumer-groups.sh fail with sasl enabled 0.9.0.1

2016-08-08 Thread Prabhu V
I am using a Kafka consumer where the partitions are assigned manually instead of via automatic group assignment, using code similar to "consumer.assign(...);". In this case bin/kafka-consumer-groups fails with the message "Consumer group `my_group1` does not exist or is rebalancing". On debugging
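
For reference, a minimal sketch of the manual-assignment pattern described (broker address and topic are assumptions): with assign() and no subscribe(), the consumer never joins the group through the coordinator, so a group describe can come back empty even though offsets are being committed under that group id.

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class ManualAssignmentConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("group.id", "my_group1");                // group id from the report above
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // assign() pins the consumer to fixed partitions and bypasses the
                // coordinator-driven group membership that subscribe() would use.
                consumer.assign(Arrays.asList(new TopicPartition("my-topic", 0))); // assumed topic
                consumer.poll(1000);
                consumer.commitSync(); // offsets are still committed under group.id my_group1
            }
        }
    }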