Re: Kafka metadata

2015-08-07 Thread Abdoulaye Diallo
@Rahul > If this is true, why does the producer API make it necessary to supply a > value for metadata.broker.list? I believe this is because of the removal of the ZK dependency (load balancing is no longer achieved through ZK). For that purpose, the 0.8 producer relies on the new cluster meta
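
For illustration, a minimal sketch of the old (0.8) producer configuration this thread is about: metadata.broker.list is only a bootstrap list the producer uses to fetch cluster metadata, after which it talks to partition leaders directly. Broker addresses, topic, and values below are placeholders, not taken from the thread.

    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class OldProducerBootstrap {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder bootstrap brokers: used only to discover the cluster metadata;
            // the producer then sends to the partition leaders it learns about.
            props.put("metadata.broker.list", "broker1:9092,broker2:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");

            Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
            producer.send(new KeyedMessage<>("test-topic", "key", "hello"));
            producer.close();
        }
    }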

Re: Kafka metadata

2015-08-07 Thread Rahul Jain
> > Alternatively you can get the same metadata from Zookeeper. If this is true, why does the producer API make it necessary to supply a value for metadata.broker.list? I noticed that this wasn't the case in 0.7. On 8 Aug 2015 04:06, "Lukas Steiblys" wrote: > Hi Qi, > > Yes, the metadata req

Re: Inconsistency with Zookeeper

2015-08-07 Thread Monika Garg
Please check and ensure that the machines can recognize each other. It might be the case that ZooKeeper and Kafka are running separately on each machine, but together they are not part of the same cluster. On 08-Aug-2015 6:34 am, "Scott Clasen" wrote: > each zk needs a myid file in the data dir, wit

Re: Inconsistency with Zookeeper

2015-08-07 Thread Scott Clasen
Each ZK node needs a myid file in its data dir, with a different number (1, 2, 3): http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html You can find the meanings of these and other configuration settings in the section Configuration Parameters.

RE: Inconsistency with Zookeeper

2015-08-07 Thread Hemanth Abbina
Yes, I have set unique broker ids (0, 1, 2) in the server.properties file. I did not get the "set broker id in zookeeper data directory" part. We don't set any broker id in ZooKeeper; we provide only "zookeeper.connect=ZK:2181,ZK2:2181,ZK3:2181" in the server.properties file. Right? -Orig

Re: OffsetOutOfRangeError with Kafka-Spark streaming

2015-08-07 Thread Cassa L
That would be great if you can also try it! As for the retention policy, I had come across an issue with version 0.8.1 where "retention.ms" is in milliseconds but the actual server property is "log.retention.minutes", and the servers would take the value as minutes? Is that true? Anyway, I have updated retention to 2
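
For reference, a hedged sketch of the distinction discussed here: "retention.ms" is a per-topic override expressed in milliseconds, while the broker-wide default comes from log.retention.minutes/log.retention.hours. The example assumes the 0.8.x AdminUtils.changeTopicConfig(ZkClient, String, Properties) API; the ZooKeeper address and topic name are placeholders.

    import java.util.Properties;

    import kafka.admin.AdminUtils;
    import kafka.utils.ZKStringSerializer$;
    import org.I0Itec.zkclient.ZkClient;

    public class TopicRetentionOverride {
        public static void main(String[] args) {
            // Placeholder ZooKeeper connect string; ZKStringSerializer$.MODULE$ is how
            // the Scala serializer object is referenced from Java.
            ZkClient zkClient = new ZkClient("zk1:2181", 30000, 30000,
                    ZKStringSerializer$.MODULE$);
            try {
                Properties configs = new Properties();
                // Topic-level override, in milliseconds (2 days here).
                configs.put("retention.ms", String.valueOf(2 * 24 * 60 * 60 * 1000L));
                AdminUtils.changeTopicConfig(zkClient, "my-topic", configs);
            } finally {
                zkClient.close();
            }
        }
    }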

Re: Kafka metadata

2015-08-07 Thread Lukas Steiblys
Hi Qi, Yes, the metadata request will return information about all the brokers in the cluster. Alternatively you can get the same metadata from Zookeeper. Lukas -Original Message- From: Qi Xu Sent: Friday, August 7, 2015 2:30 PM To: users@kafka.apache.org Subject: Kafka metadata Hi
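
For illustration, a minimal sketch of issuing a topic metadata request against a single broker with the 0.8 SimpleConsumer API; any one reachable broker should return the leader/replica layout for the whole topic. Host, port, topic, and client id are placeholders.

    import java.util.Collections;

    import kafka.javaapi.PartitionMetadata;
    import kafka.javaapi.TopicMetadata;
    import kafka.javaapi.TopicMetadataRequest;
    import kafka.javaapi.TopicMetadataResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class MetadataLookup {
        public static void main(String[] args) {
            // Placeholder broker; any broker in the cluster can answer the request.
            SimpleConsumer consumer =
                    new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "metadata-lookup");
            try {
                TopicMetadataRequest request =
                        new TopicMetadataRequest(Collections.singletonList("test-topic"));
                TopicMetadataResponse response = consumer.send(request);
                for (TopicMetadata topic : response.topicsMetadata()) {
                    for (PartitionMetadata partition : topic.partitionsMetadata()) {
                        System.out.println("partition " + partition.partitionId()
                                + " leader " + partition.leader());
                    }
                }
            } finally {
                consumer.close();
            }
        }
    }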

Kafka metadata

2015-08-07 Thread Qi Xu
Hi Everyone, I have a question that I hope to get some clarification on. In a Kafka cluster, does every broker have the complete view of the metadata? What's the best practice for a producer to send a metadata request? Is it recommended to send it to all brokers or just one broker? In our sce

Re: Suggestions when all replicas of a partition are dead?

2015-08-07 Thread Daniel Compton
I would have thought you'd want ZK up before Kafka started, but I don't have any strong data to back that up. On Sat, 8 Aug 2015 at 7:59 AM Steve Miller wrote: > So... we had an extensive recabling exercise, during which we had to > shut down and derack and rerack a whole Kafka cluster. Then

Suggestions when all replicas of a partition are dead?

2015-08-07 Thread Steve Miller
So... we had an extensive recabling exercise, during which we had to shut down and derack and rerack a whole Kafka cluster. Then when we brought it back up, we discovered the hard way that two hosts had their "rebuild on reboot" flag set in Cobbler. Everything on those hosts is gone as a

Re: AdminUtils addPartition, subsequent producer send exception

2015-08-07 Thread Gelinas, Chiara
I am not surprised to hear it is outside the norm. There is really one main use-case for this, and it’s processing messages in an ordered manner with HTTP POST delivery for a logical group of recipients. But actually, I have some questions about normal use because I do want to assess if the r

Re: AdminUtils addPartition, subsequent producer send exception

2015-08-07 Thread Grant Henke
Glad to help. I will say, as you probably got from my interest/questions, that is definitely outside of normal use (that I have seen). Why do you need dynamic logical partitioning? On Fri, Aug 7, 2015 at 1:20 PM, Gelinas, Chiara wrote: > Thank you! We are new to Kafka, so this makes complete sen

Re: AdminUtils addPartition, subsequent producer send exception

2015-08-07 Thread Gelinas, Chiara
Thank you! We are new to Kafka, so this makes complete sense - the metadata refresh age. Yes, incrementally assigning 1 partition per key - we are tracking it in a relational DB along with the offset, etc., for the consumers. We haven't yet implemented the dynamic consumer side but there are several app

Re: AdminUtils addPartition, subsequent producer send exception

2015-08-07 Thread Grant Henke
Interesting use case. I would be interested to hear more. Are you assigning 1 partition per key incrementally? How does your consumer know which partition has which key? I don't think there is a way to manually invalidate the cached metadata in the public producer API (I could be wrong), but the l
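
For reference, a hedged sketch of the workaround the rest of the thread points to (the "metadata refresh age"): rather than invalidating the producer's metadata cache by hand, lower metadata.max.age.ms on the new (0.8.2) producer so newly added partitions show up after the next refresh. Broker address, topic, and the 5-second value are placeholders.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class LowMetadataAgeProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");   // placeholder broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // Refresh cached metadata at least every 5 seconds (example value) so
            // partitions added via AdminUtils become visible to the producer quickly.
            props.put("metadata.max.age.ms", "5000");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.send(new ProducerRecord<>("test-topic", "new-key", "value"));
            producer.close();
        }
    }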

AdminUtils addPartition, subsequent producer send exception

2015-08-07 Thread Gelinas, Chiara
Hi All, We are looking to dynamically create partitions when we see a new piece of data that requires logical partitioning (so partitioning from a logical perspective rather than partitioning solely for load-based reasons). I noticed that when I create a partition via AdminUtils.addPartition, a
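
For context, a hedged sketch of growing a topic's partition count programmatically. It assumes the 0.8.x signature AdminUtils.addPartitions(ZkClient, topic, numPartitions, replicaAssignmentStr, checkBrokerAvailable), which may differ in other versions; the ZooKeeper address, topic, and partition count are placeholders.

    import kafka.admin.AdminUtils;
    import kafka.utils.ZKStringSerializer$;
    import org.I0Itec.zkclient.ZkClient;

    public class AddPartitionsSketch {
        public static void main(String[] args) {
            // Placeholder ZooKeeper connect string.
            ZkClient zkClient = new ZkClient("zk1:2181", 30000, 30000,
                    ZKStringSerializer$.MODULE$);
            try {
                // Grow "test-topic" to 4 partitions (assumed signature, see note above):
                // "" lets Kafka choose the replica assignment, true verifies the brokers exist.
                AdminUtils.addPartitions(zkClient, "test-topic", 4, "", true);
            } finally {
                zkClient.close();
            }
        }
    }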

how to get single record from kafka topic+partition @ specified offset

2015-08-07 Thread Padgett, Ben
Does anyone have an example of how to get a single record from a topic+partition given a specific offset? I am interested in this for some retry logic for failed messages. Thanks!
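
One way with the 0.8 SimpleConsumer API is to fetch from the partition leader starting at the recorded offset and keep only the message whose offset matches; a minimal sketch, with the broker host, topic, partition, offset, and client id as placeholders.

    import java.nio.ByteBuffer;

    import kafka.api.FetchRequest;
    import kafka.api.FetchRequestBuilder;
    import kafka.javaapi.FetchResponse;
    import kafka.javaapi.consumer.SimpleConsumer;
    import kafka.message.MessageAndOffset;

    public class FetchSingleMessage {
        public static void main(String[] args) throws Exception {
            String topic = "test-topic";
            int partition = 0;
            long offset = 12345L;   // placeholder: the offset recorded for the failed message

            // Must point at the current leader for the partition
            // (discoverable via a topic metadata request).
            SimpleConsumer consumer =
                    new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "retry-fetcher");
            try {
                FetchRequest request = new FetchRequestBuilder()
                        .clientId("retry-fetcher")
                        .addFetch(topic, partition, offset, 100000)
                        .build();
                FetchResponse response = consumer.fetch(request);
                if (response.hasError()) {
                    System.err.println("fetch error: " + response.errorCode(topic, partition));
                    return;
                }
                for (MessageAndOffset mo : response.messageSet(topic, partition)) {
                    if (mo.offset() != offset) {
                        continue;   // a fetch returns a batch; skip all but the wanted offset
                    }
                    ByteBuffer payload = mo.message().payload();
                    byte[] bytes = new byte[payload.limit()];
                    payload.get(bytes);
                    System.out.println("offset " + mo.offset() + ": " + new String(bytes, "UTF-8"));
                    break;
                }
            } finally {
                consumer.close();
            }
        }
    }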

Re: Inconsistency with Zookeeper

2015-08-07 Thread Prabhjot Bharaj
Have you set the broker id in the zookeeper data directory and a unique broker id in server.properties? Regards, Prabhjot On Aug 6, 2015 1:43 PM, "Hemanth Abbina" wrote: > Hi, > > I am running a Kafka POC with below details > > * 3 Node cluster (4 Core, 16 GB RAM each) running Kafka 0.8.2.1

Re: kafka log flush questions

2015-08-07 Thread Tao Feng
Thanks Ben for the detailed explanation. -Tao On Fri, Aug 7, 2015 at 3:28 AM, Ben Stopford wrote: > Hi Tao > > 1. I am wondering if the fsync operation is called by the last two routines > internally? > => Yes > > 2. If log.flush.interval.ms is not specified, is it true that Kafka lets the OS > hand

Re: Spooling support for kafka publishers !

2015-08-07 Thread Ben Stopford
Yes - that Jira needs completing, but I expect it is what you are looking for. You are welcome to pick it up if you wish. Otherwise I can pick it up. B > On 7 Aug 2015, at 10:52, sunil kalva wrote: > > -- Forwarded message -- > From: sunil kalva > Date: Fri, Aug 7, 2015 at

Re: kafka log flush questions

2015-08-07 Thread Ben Stopford
Hi Tao 1. I am wondering if the fsync operation is called by the last two routines internally? => Yes 2. If log.flush.interval.ms is not specified, is it true that Kafka lets the OS handle pagecache flushes in the background? => Yes 3. If we specify ack=1 and ack=-1 in the new producer, do those requests onl
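
For reference, a hedged sketch of the acks setting in the new (0.8.2) producer that question 3 asks about: acks controls how many replicas must acknowledge the write before the send is considered successful, and is independent of fsync, which is governed by the broker-side log.flush.* settings or left to the OS page cache. Broker address and topic are placeholders.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class AcksExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");   // placeholder broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // "-1": wait for all in-sync replicas to acknowledge; "1": leader only;
            // "0": fire and forget. None of these force an fsync on the broker.
            props.put("acks", "-1");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            RecordMetadata md =
                    producer.send(new ProducerRecord<>("test-topic", "k", "v")).get();
            System.out.println("acknowledged at offset " + md.offset());
            producer.close();
        }
    }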

Fwd: Spooling support for kafka publishers !

2015-08-07 Thread sunil kalva
-- Forwarded message -- From: sunil kalva Date: Fri, Aug 7, 2015 at 2:12 PM Subject: Spooling support for kafka publishers ! To: d...@kafka.apache.org Hi, What are the best practices to achieve spooling support on the producer end if the Kafka cluster is not reachable or degraded? W