Re: java.nio.channels.ClosedChannelException...Firewall Issue?

2015-01-19 Thread Su She
Hi Jaikiran, Thanks for the reply! 1) I started Kafka server on instance A by simply downloading Kafka_2.10-0.8.2-beta.tgz from the kafka website, and using the scripts mentioned here: http://kafka.apache.org/documentation.html#introduction. This is the same way I downloaded Kafka on B, except I

How to setup inter DC replication in Kafka 0.8.1.1

2015-01-19 Thread Madhukar Bharti
Hi, I want to set up inter-DC replication between Kafka clusters. Is there any inbuilt tool to do this? I have already tried the MirrorMaker tool, but the problem is that if MM is killed, some messages get duplicated. I don't want to duplicate the messages. Please suggest a way to do this. Please share your

Re: Backups

2015-01-19 Thread Gwen Shapira
Hi, As a former DBA, I hear you on backups :) Technically, you could copy all log.dir files somewhere safe occasionally. I'm pretty sure we don't guarantee the consistency or safety of this copy. You could find yourself with a corrupt "backup" by copying files that are either in the middle of get

Re: java.nio.channels.ClosedChannelException...Firewall Issue?

2015-01-19 Thread Jaikiran Pai
Hi Su, How exactly did you start the Kafka server on instance "A"? Are you sure the services on it are bound to non localhost IP? What does the following command result from instance B: telnet public.ip.of.A 9092 -Jaikiran On Tuesday 20 January 2015 07:16 AM, Su She wrote: Hello Everyone,

[VOTE] 0.8.2.0 Candidate 2

2015-01-19 Thread Jun Rao
This is the second candidate for release of Apache Kafka 0.8.2.0. There have been some changes since the 0.8.2 beta release, especially in the new java producer api and jmx mbean names. It would be great if people can test this out thoroughly. Release Notes for the 0.8.2.0 release *https://people.a

Re: Kafka Out of Memory error

2015-01-19 Thread Gwen Shapira
Two things: 1. The OOM happened on the consumer, right? So the memory that matters is the RAM on the consumer machine, not on the Kafka cluster nodes. 2. If the consumers belong to the same consumer group, each will consume a subset of the partitions and will only need to allocate memory for those

Re: Number of Consumers Connected

2015-01-19 Thread Guozhang Wang
There is a property config you can set via bin/kafka-console-consumer.sh to commit offsets to ZK, you can use bin/kafka-console-consumer.sh --help to list all the properties. Guozhang On Mon, Jan 19, 2015 at 5:15 PM, Sa Li wrote: > Guozhang, > > Currently we are in the stage to testing producer

java.nio.channels.ClosedChannelException...Firewall Issue?

2015-01-19 Thread Su She
Hello Everyone, Thank you for the help! Preface: I've created producers/consumers before and they have worked. I have also made consumers/producers using java programs, but they have all been locally. 1) I have a Zookeeper/Kafka Server running on an EC2 instance called "A" 2) I started the Zook

Re: Number of Consumers Connected

2015-01-19 Thread Sa Li
Guozhang, Currently we are at the stage of testing the producer; our C# producer sends data to the brokers, and we use the bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance command to produce the messages. We don't have a coded consumer to commit offsets; we use bin/kafka-console-consum

Re: Command to list my brokers

2015-01-19 Thread Guozhang Wang
Dillian, Currently we do not have a script tool to list / verify all the brokers directly. The best practice would be checking the /brokers/ids path on ZK. This situation could be improved though; could you file a JIRA for adding an admin tool for listing / verifying online brokers? Guozhang On Sat,

Re: In Apache Kafka, how can one achieve delay queue support (similar to what ActiveMQ has)?

2015-01-19 Thread Guozhang Wang
Vish, I am assuming by "delay queue support" you mean something like: http://activemq.apache.org/delay-and-schedule-message-delivery.html Kafka uses a client-pull based consumption model, i.e. the consumers will determine when to fetch the next message after it has, for example, waited for some time
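Guozhang's point is that, because consumers pull, delayed delivery can live entirely on the client side. A minimal sketch of that idea (the function names and the `(produced_at, payload)` tuple shape are illustrative, not part of any Kafka API):

```python
import time

def is_due(produced_at, delay_seconds, now=None):
    """Return True once `delay_seconds` have elapsed since `produced_at`."""
    if now is None:
        now = time.time()
    return now - produced_at >= delay_seconds

def process_with_delay(messages, delay_seconds, now):
    """Yield only the payloads whose delay has already elapsed.

    `messages` is an iterable of (produced_at, payload) tuples, as a
    consumer might build from fetched records. Messages not yet due are
    skipped here; a real consumer would pause and re-poll them later.
    """
    for produced_at, payload in messages:
        if is_due(produced_at, delay_seconds, now):
            yield payload
```

Since the broker never reorders or holds messages back, the consumer simply declines to process a record until its scheduled time arrives.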

Re: kafka shutdown automatically

2015-01-19 Thread Guozhang Wang
Yonghui, which version of Kafka are you using? And does your cluster only have one (broker-0) server? Guozhang On Sat, Jan 17, 2015 at 11:53 PM, Yonghui Zhao wrote: > Hi, > > our kafka cluster shut down automatically today, here is the log. > > I don't find any error log. Anything wrong? >

Re: Number of Consumers Connected

2015-01-19 Thread Guozhang Wang
Sa, Did your consumer ever commit offsets to Kafka? If not then no corresponding ZK path will be created. Guozhang On Mon, Jan 19, 2015 at 3:58 PM, Sa Li wrote: > Hi, > > I use such tool > > Consumer Offset Checker > > Displays the: Consumer Group, Topic, Partitions, Offset, logSize, Lag, > O

Re: can't iterate consumed messages when checking errorCode first

2015-01-19 Thread Manu Zhang
Thanks Jun. I don't see any error code, and the fetch size is larger than the largest single message. Actually, when I call response.messageSet(topic, partition).toBuffer.size the value is the number of messages I've produced to Kafka. On Tue Jan 20 2015 at 12:31:53 AM Jun Rao wrote: > Di

Re: Number of Consumers Connected

2015-01-19 Thread Sa Li
Hi, I use such tool Consumer Offset Checker Displays the: Consumer Group, Topic, Partitions, Offset, logSize, Lag, Owner for the specified set of Topics and Consumer Group bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker To be able to know the consumer group, in zkCli.sh [zk: localhos

Re: Kafka Out of Memory error

2015-01-19 Thread Pranay Agarwal
Thanks a lot Natty. I am using this Ruby gem on the client side with all the default config https://github.com/joekiller/jruby-kafka/blob/master/lib/jruby-kafka/group.rb and the value fetch.message.max.bytes is set to 1MB. Currently I only have 3 nodes setup in the Kafka cluster (with 8 GB RAM) a

Re: Kafka Out of Memory error

2015-01-19 Thread Jonathan Natkins
The fetch.message.max.bytes is actually a client-side configuration. With regard to increasing the number of threads, I think the calculation may be a little more subtle than what you're proposing, and frankly, it's unlikely that your servers can handle allocating 200MB x 1000 threads = 200GB of mem
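A back-of-envelope check of the figure in this thread: 1000 consumer threads each allocating a 200 MB fetch buffer. The numbers are taken from the discussion and are illustrative, not measured:

```python
# Rough upper bound on consumer memory if every thread holds one
# fetch-sized buffer at once (values from the thread, illustrative only).
MB = 1024 * 1024
GB = 1024 * MB

threads = 1000
fetch_buffer = 200 * MB  # roughly one fetch allocation per in-flight request

total = threads * fetch_buffer
print(total / GB)  # about 195 GB, i.e. the ~200 GB figure in the thread
```

This is why shrinking the fetch size, or running far fewer threads per machine, matters more than adding consumer threads.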

Re: Jar files needed to run in Java environment (without Maven)

2015-01-19 Thread Jonathan Natkins
Hi Suhas, Without seeing the actual output of the stacktrace, I'd suspect that spark-submit is doing some classpath magic that is covering some dependencies you may not have included. Depending on your use case, it might be easier to deal with this by just having maven output a pre-built jar-with-

Re: Kafka Out of Memory error

2015-01-19 Thread Pranay Agarwal
Thanks Natty. Is there any config which I need to change on the client side as well? Also, currently I am trying with only 1 consumer thread. Does the equation change to (#partitions)*(fetchsize)*(#consumer_threads) in case I try to read with 1000 threads from topic2 (1000 partitions)? -Pran

Re: Kafka Out of Memory error

2015-01-19 Thread Jonathan Natkins
Hi Pranay, I think the JIRA you're referencing is a bit orthogonal to the OOME that you're experiencing. Based on the stacktrace, it looks like your OOME is coming from a consumer request, which is attempting to allocate 200MB. There was a thread (relatively recently) that discussed what I think i

[VOTE CANCELLED] 0.8.2.0 Candidate 1

2015-01-19 Thread Jun Rao
Thanks for reporting the issues in RC1. I will prepare RC2 and start a new vote. Jun On Tue, Jan 13, 2015 at 7:16 PM, Jun Rao wrote: > This is the first candidate for release of Apache Kafka 0.8.2.0. There > have been some changes since the 0.8.2 beta release, especially in the new > java produc

Kafka Out of Memory error

2015-01-19 Thread Pranay Agarwal
Hi All, I have a kafka cluster setup which has 2 topics: topic1 with 10 partitions, topic2 with 1000 partitions. While I am able to consume messages from topic1 just fine, I get the following error from topic2. There is a resolved issue on the same thing here: https://issues.apache.org/jira/browse

Join

2015-01-19 Thread Pranay Agarwal
Please subscribe me.

Re: kafka-web-console error

2015-01-19 Thread Sa Li
Continuing this kafka-web-console thread, I followed this page: http://mungeol-heo.blogspot.ca/2014/12/kafka-web-console.html I run the command: play "start -Dhttp.port=8080" It works well for a while, but then I get this error : at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.j

Re: connection error among nodes

2015-01-19 Thread Sa Li
Hello, Jun I run this command: bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance test-rep-three 100 3000 -1 acks=-1 bootstrap.servers=10.100.98.100:9092,10.100.98.101:9092,10.100.98.102:9092 buffer.memory=6710

Re: Issue size message

2015-01-19 Thread Eduardo Costa Alfaia
Hi guys, Ok, I’ve tried this and it worked fine. Thanks > On Jan 19, 2015, at 19:10, Joe Stein wrote: > > If you increase the size of the messages for producing then you MUST also > change replica.fetch.max.bytes in the broker server.properties, otherwise > none of your replicas will be a

Re: Issue size message

2015-01-19 Thread Joe Stein
If you increase the size of the messages for producing then you MUST also change replica.fetch.max.bytes in the broker server.properties, otherwise none of your replicas will be able to fetch from the leader and they will all fall out of the ISR. You also then need to change your consumers
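Putting the three settings from this thread together, a config fragment might look like the following (the 2 MB value mirrors the message size discussed in this thread and is illustrative; the key constraint is that the replica and consumer fetch sizes must be at least message.max.bytes):

```properties
# broker: server.properties -- accept 2 MB messages and let replicas fetch them
message.max.bytes=2097152
replica.fetch.max.bytes=2097152

# consumer configuration -- the fetch size must also cover the largest message
fetch.message.max.bytes=2097152
```

If replica.fetch.max.bytes stays below message.max.bytes, followers cannot replicate the large messages and drop out of the ISR, which is exactly the failure mode Joe describes.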

Re: Issue size message

2015-01-19 Thread Magnus Edenhill
(duplicating the github answer for reference) Hi Eduardo, the default maximum fetch size is 1 Meg, which means your 2 Meg messages will not fit in the fetch request. Try increasing it by appending -X fetch.message.max.bytes=400 to your command line. Regards, Magnus 2015-01-19 17:52 GMT+01:00 E

Jar files needed to run in Java environment (without Maven)

2015-01-19 Thread Su She
Hello All, I posted a question last week ( http://mail-archives.apache.org/mod_mbox/kafka-users/201501.mbox/browser) but I'm not able to respond to the thread for some reason? The suggestion on the last post was that I am missing some jar files in my classpath, but I am using the following jar fi

Re: dumping JMX data

2015-01-19 Thread Jaikiran Pai
Hi Scott, A quick look at the JmxTool code suggests that it probably isn't able to find the attribute for that MBean, although that MBean does seem to have 1 attribute named Value (I used jconsole to check that). The output you are seeing is merely the date (without any format) being printed o

Issue size message

2015-01-19 Thread Eduardo Costa Alfaia
Hi All, I am having an issue when using kafka with librdkafka. I've changed message.max.bytes to 2MB in my server.properties config file, which is the size of my message. When I run the command line ./rdkafka_performance -C -t test -p 0 -b computer49:9092, after consuming some messages the cons

Re: can't iterate consumed messages when checking errorCode first

2015-01-19 Thread Jun Rao
Did you get any error code in the response? Also, make sure fetchSize is larger than the largest single message. Thanks, Jun On Sun, Jan 18, 2015 at 4:54 PM, Manu Zhang wrote: > Hi all, > > I'm using Kafka low level consumer api and find in the below codes > "iterator.hasNext" always return fa

Re: latency - how to reduce?

2015-01-19 Thread Shannon Lloyd
This is a good point, even though you mentioned that you also have latency issues locally. I just migrated a 3-node test cluster from m3.large instances to c4.xlarge instances (3-node ZK migrated from m3.medium to c4.large) in an EC2 placement group (better network IO and more consistent latencies)