Re: Hung Kafka Threads?

2015-04-13 Thread Ewen Cheslack-Postava
"Parking to wait for" just means the thread has been put to sleep while waiting for some synchronized resource. In this case, "ConditionObject" indicates it's probably await()ing on a condition variable. This almost always means that thread is just waiting for notification from another thread that

Re: several questions about auto.offset.reset

2015-04-13 Thread Ewen Cheslack-Postava
On Mon, Apr 13, 2015 at 10:10 PM, bit1...@163.com wrote: > Hi, Kafka experts: > > I got several questions about auto.offset.reset. This configuration > parameter governs how the consumer reads messages from Kafka when there is > no initial offset in ZooKeeper or if an offset is out of range. > >

several questions about auto.offset.reset

2015-04-13 Thread bit1...@163.com
Hi, Kafka experts: I got several questions about auto.offset.reset. This configuration parameter governs how the consumer reads messages from Kafka when there is no initial offset in ZooKeeper or if an offset is out of range. Q1. "no initial offset in zookeeper " means that there isn't any co
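
A hedged illustration of the parameter being asked about, as it would appear in a 0.8.x high-level consumer configuration (host, port and group name below are placeholders, not values from the mail):

    # consumer.properties (0.8.x high-level consumer) - illustrative values only
    zookeeper.connect=localhost:2181
    group.id=my-consumer-group
    # Applied when there is no initial offset in ZooKeeper, or the stored
    # offset is out of range:
    #   smallest : reset to the oldest available offset
    #   largest  : reset to the latest offset (skip ahead to new data)
    auto.offset.reset=largest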

Hung Kafka Threads?

2015-04-13 Thread Sharma, Prashant
Kafka version is 0.8.1.1. On taking a thread dump against one of our servers in the Kafka cluster, I see a lot of threads with the message below: "SOMEID-9" id=67784 idx=0x75c tid=24485 prio=5 alive, parked, native_blocked, daemon -- Parking to wait for: java/util/concurrent/locks/AbstractQueuedSynch

Re: Please remove me from this mailing list - thx

2015-04-13 Thread Guozhang Wang
The mailing list is self-service; you can find instructions to unsubscribe here: http://kafka.apache.org/contact.html On Mon, Apr 13, 2015 at 12:16 PM, Orelowitz, David wrote: > > > -- > This message, and any attachments, is for the inte

Re: java consumer client sometimes work,but sometimes not.

2015-04-13 Thread Guozhang Wang
BTW, it seems you are referring to the producer, not the consumer as claimed in the title. On Mon, Apr 13, 2015 at 5:54 PM, François Méthot wrote: > It could be that the maximum number of connections was reached for the client > IP you are using. The default is 10. But it can be changed. I had simila

Re: java consumer client sometimes work,but sometimes not.

2015-04-13 Thread François Méthot
It could be that the maximum number of connections was reached for the client IP you are using. The default is 10. But it can be changed. I had similar intermittent issues because of that. The property is max.connections.per.ip On 2015-04-12 11:20 PM, "kaybin wong" wrote: > > hi there. > i got
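
A hedged sketch of where that broker-side property lives (the numeric value is a placeholder; check your broker version's documentation for the actual default):

    # server.properties (broker) - illustrative value only
    # Maximum number of connections the broker accepts from a single client IP;
    # raise it if many producers/consumers connect from the same address.
    max.connections.per.ip=100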

Re: Producer does not recognize new brokers

2015-04-13 Thread Chi Hoang
I highly recommend https://github.com/airbnb/kafkat, which will simplify your partition management tasks. Use it with https://github.com/airbnb/kafkat/pull/3 for partition specific reassignment. Chi On Mon, Apr 13, 2015 at 4:08 AM, Jan Filipiak wrote: > Hey, > > try to not have newlines \n in

Re: Kafka server relocation

2015-04-13 Thread tao xiao
How about the consumer lag of the mirror maker? On Mon, Apr 13, 2015 at 1:33 PM, nitin sharma wrote: > I just tested that too and below are the stats. It is clear that with > "kafka-consumer-perf-test.sh" I am able to get a high throughput, around > 44.0213 MB/sec. > > Seriously, some configuration

Re: Kafka server relocation

2015-04-13 Thread nitin sharma
I just tested that too and below are the stats. It is clear that with "kafka-consumer-perf-test.sh" I am able to get a high throughput, around 44.0213 MB/sec. Seriously, some configuration needs to be tweaked in the MirrorMaker configuration for speedy processing... Can you think of something? star
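
For reference, a hedged example of the kind of perf-test invocation that produces such throughput numbers (ZooKeeper host, topic, group and message count are placeholders; flag names follow the 0.8.x console tools):

    bin/kafka-consumer-perf-test.sh \
      --zookeeper source-zk:2181 \
      --topic my-topic \
      --group perf-test-group \
      --messages 1000000 \
      --threads 1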

Re: Consumer offsets in offsets topic 0.8.2

2015-04-13 Thread Jiangjie Qin
Yeah, the current ConsumerOffsetChecker has this issue (maybe a bug as well) if the offset storage is Kafka and no offset has been committed. It will throw a ZK exception, which is very confusing. KAFKA-1951 was opened for this but was not checked in. Thanks. Jiangjie (Becket) Qin On 4/13/15, 9:55 AM,
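
For context, a hedged example of the checker invocation being discussed (group and topic names are placeholders); with offsets stored in Kafka and nothing committed yet, this is the call that ends in the confusing ZooKeeper exception:

    bin/kafka-consumer-offset-checker.sh \
      --zookeeper zk-host:2181 \
      --group my-consumer-group \
      --topic my-topic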

Re: Kafka server relocation

2015-04-13 Thread tao xiao
num.consumer.fetchers means the max number of fetcher threads that can be spawned. It doesn't necessarily mean you can get as many fetcher threads as you specify. To me the metrics are suggesting a very slow consumption rate, only 18.21 bytes/minute. Here is the benchmark LinkedIn does: http://engi
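
A hedged sketch of where that knob sits in the mirror maker's consumer configuration (values are placeholders; as noted above, the setting is an upper bound, not a guaranteed thread count):

    # consumer.properties passed to MirrorMaker - illustrative values only
    zookeeper.connect=source-zk:2181
    group.id=mirror-maker-group
    # Upper bound on fetcher threads; the actual number also depends on the
    # number of source brokers and partitions being consumed.
    num.consumer.fetchers=4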

Re: Kafka server relocation

2015-04-13 Thread nitin sharma
Hi Xiao, I have finally got JMX monitoring enabled for my Kafka nodes in the test environment and here is what I observed. I was monitoring MBeans under the "kafka.consumer" domain of the JVM running the "Kafka Mirror Maker" process. = AllTopicsBytes ===> 18.21 bytes/minute FetchRequestRa
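
For completeness, a hedged sketch of one way JMX is commonly exposed for the mirror maker JVM (port number, config file names and topic whitelist are placeholders; kafka-run-class.sh honours the JMX_PORT environment variable):

    export JMX_PORT=9999
    bin/kafka-run-class.sh kafka.tools.MirrorMaker \
      --consumer.config source-consumer.properties \
      --producer.config target-producer.properties \
      --whitelist 'my-topic'
    # Then point jconsole or VisualVM at host:9999 and browse the
    # kafka.consumer and kafka.producer MBean domains.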

Re: Topic to broker assignment

2015-04-13 Thread Gwen Shapira
Here's the algorithm as described in AdminUtils.scala: /** * There are 2 goals of replica assignment: * 1. Spread the replicas evenly among brokers. * 2. For partitions assigned to a particular broker, their other replicas are spread over the other brokers. * * To achieve this goa
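
To make the two goals concrete, here is a simplified, hypothetical Java sketch of the round-robin-with-shift idea behind the assignment (it leaves out the random start index and other details of the real Scala code in AdminUtils, so treat it as an illustration only):

    import java.util.ArrayList;
    import java.util.List;

    public class ReplicaAssignmentSketch {
        /**
         * Assign replicationFactor replicas per partition over the given broker ids.
         * Assumes replicationFactor <= brokers.length and at least 2 brokers.
         */
        static List<List<Integer>> assign(int[] brokers, int partitions, int replicationFactor) {
            int n = brokers.length;
            List<List<Integer>> assignment = new ArrayList<>();
            int nextReplicaShift = 0;            // the real code starts from a random shift
            for (int p = 0; p < partitions; p++) {
                if (p > 0 && p % n == 0) {
                    nextReplicaShift++;          // vary follower placement on each wrap-around
                }
                int first = p % n;               // goal 1: leaders spread evenly over brokers
                List<Integer> replicas = new ArrayList<>();
                replicas.add(brokers[first]);
                for (int j = 0; j < replicationFactor - 1; j++) {
                    // goal 2: followers land on brokers other than the leader's
                    int shift = 1 + (nextReplicaShift + j) % (n - 1);
                    replicas.add(brokers[(first + shift) % n]);
                }
                assignment.add(replicas);
            }
            return assignment;
        }

        public static void main(String[] args) {
            // 5 brokers (think A..E from the question), 10 partitions, replication factor 3
            assign(new int[]{0, 1, 2, 3, 4}, 10, 3).forEach(System.out::println);
        }
    }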

Re: Topic to broker assignment

2015-04-13 Thread Bill Hastings
Thanks for the reference. But it doesn't seem to cover how a particular topic is assigned to a particular broker and also how the replicas are chosen. For example, if I have brokers A, B, C, D and E, what algorithm is used to assign topic X, partition 1 to brokers B, D and E if the chosen replication fac

Please remove me from this mailing list - thx

2015-04-13 Thread Orelowitz, David
-- This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.ban

Re: Topic to broker assignment

2015-04-13 Thread Jiangjie Qin
A quick reference. http://www.slideshare.net/junrao/kafka-replication-apachecon2013 On 4/12/15, 11:36 PM, "Bill Hastings" wrote: >Hi Guys > >How do topics get assigned to brokers? I mean if I were to create a topic >X >and publish to it how does Kafka assign the topic and the message to a >part

Re: Consumer offsets in offsets topic 0.8.2

2015-04-13 Thread Madhukar Bharti
Hi Vamsi, You can also see the example here if you want to use the Java API to get the offset from the topic. Regards, Madhukar On Mon, Apr 13, 2015 at 10:25 PM, 4mayank <4may...@gmail.com> wr

Re: Consumer offsets in offsets topic 0.8.2

2015-04-13 Thread 4mayank
I did a similar change - moved from the High Level Consumer to the Simple Consumer. However, kafka-consumer-offset-checker.sh throws an exception. It's searching the ZK path /consumers// which does not exist on any of my ZK nodes. Is there any other tool for getting the offset lag when using Simple Consume
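
If the committed offsets are kept by the application itself (as is common with the Simple Consumer), one hedged way to compute lag is to ask the partition leader for its latest offset and subtract your own stored position. Broker host, port, topic and the stored offset below are placeholders; the request pattern follows the documented 0.8 SimpleConsumer API:

    import java.util.Collections;
    import java.util.Map;

    import kafka.api.PartitionOffsetRequestInfo;
    import kafka.common.TopicAndPartition;
    import kafka.javaapi.OffsetResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class LagCheckSketch {
        public static void main(String[] args) {
            String topic = "my-topic";   // placeholder
            int partition = 0;
            // Connect to the broker that currently leads this partition.
            SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "lag-check");
            try {
                TopicAndPartition tp = new TopicAndPartition(topic, partition);
                Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
                    Collections.singletonMap(tp,
                        new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));
                kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
                    requestInfo, kafka.api.OffsetRequest.CurrentVersion(), "lag-check");
                OffsetResponse response = consumer.getOffsetsBefore(request);
                long logEndOffset = response.offsets(topic, partition)[0];
                long myCommittedOffset = 42L;  // wherever the application stores its position
                System.out.println("lag = " + (logEndOffset - myCommittedOffset));
            } finally {
                consumer.close();
            }
        }
    }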

Re: Producer does not recognize new brokers

2015-04-13 Thread Jan Filipiak
Hey, try not to have newlines \n in your JSON file. I think the parser dies on those and then claims the file is empty. Best, Jan On 13.04.2015 12:06, Ashutosh Kumar wrote: Probably you should first try to generate a proposed plan using the --generate option and then edit that if needed. Thanks

Re: Producer does not recognize new brokers

2015-04-13 Thread Ashutosh Kumar
Probably you should first try to generate a proposed plan using the --generate option and then edit that if needed. Thanks. On Mon, Apr 13, 2015 at 3:12 PM, shadyxu wrote: > Thanks guys. You are right and then here comes another problem: > > I added new brokers 4, 5 and 6. Now I want to move partitio

Re: Producer does not recognize new brokers

2015-04-13 Thread shadyxu
Thanks guys. You are right and then here comes another problem: I added new brokers 4, 5 and 6. Now I want to move partitions 3, 4 and 5 (currently on brokers 1, 2 and 3) of topic test to these brokers. I wrote the r.json file like this: {"partitions": [{"topic": "test","partition": 3,"replicas": [4]}
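
A hedged sketch of the full workflow being discussed (ZooKeeper host and broker ids are placeholders; the JSON is kept on single lines per the advice in this thread, and a "version" field is normally included):

    # topics-to-move.json (single line)
    {"version": 1, "topics": [{"topic": "test"}]}

    # Ask the tool to propose an assignment over the new brokers:
    bin/kafka-reassign-partitions.sh --zookeeper zk-host:2181 \
      --topics-to-move-json-file topics-to-move.json \
      --broker-list "4,5,6" --generate

    # r.json, hand-edited from the proposal; "replicas" is the full target replica set (single line):
    {"version": 1, "partitions": [{"topic": "test", "partition": 3, "replicas": [4]}, {"topic": "test", "partition": 4, "replicas": [5]}, {"topic": "test", "partition": 5, "replicas": [6]}]}

    # Execute, then later verify the move:
    bin/kafka-reassign-partitions.sh --zookeeper zk-host:2181 \
      --reassignment-json-file r.json --execute
    bin/kafka-reassign-partitions.sh --zookeeper zk-host:2181 \
      --reassignment-json-file r.json --verify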

Re: Some queries about java api for kafka producer

2015-04-13 Thread dhiraj prajapati
Thanks a lot. On 13 Apr 2015 06:08, "Manoj Khangaonkar" wrote: > Clarification. My answer applies to the new producer API in 0.8.2. > > Regards > > On Sun, Apr 12, 2015 at 4:00 PM, Manoj Khangaonkar > wrote: > > > Hi, > > > > For (1), from the Java docs: "The producer is *thread safe* and should >
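
Since the quoted advice is about the new (0.8.2) producer being thread safe, here is a hedged sketch of the usual pattern it implies: a single KafkaProducer instance shared by all sending threads (broker address and topic name are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SharedProducerExample {
        public static void main(String[] args) throws InterruptedException {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker-host:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            // One producer instance, shared by every sending thread.
            final KafkaProducer<String, String> producer = new KafkaProducer<>(props);

            Runnable sender = () -> {
                for (int i = 0; i < 10; i++) {
                    producer.send(new ProducerRecord<>("my-topic",
                            Thread.currentThread().getName(), "message-" + i));
                }
            };
            Thread t1 = new Thread(sender);
            Thread t2 = new Thread(sender);
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            producer.close();   // flushes any buffered records before exiting
        }
    }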