Re: Apache Kafka HTTP Producer & Consumer

2014-01-16 Thread Joe Stein
We can create another producer type under something like /producer/ for that. I opened an issue https://github.com/stealthly/dropwizard-kafka-http/issues/3 feel free to talk more about it there. We are going to end up with a few dozen api calls (i.e. /tools/) I suspect. The Vagrantfile is helpf

RE: Question about missing broker data in zookeeper

2014-01-16 Thread Jagbir Hooda
Thanks for your help Guozhang. I can get the data. 8< get /brokers/ids/3 { "host":"mongodb3", "jmx_port":-1, "port":9093, "timestamp":"1389924263116", "version":1 } cZxid = 0x1000f ctime = Thu Jan 16 18:04:23 PST 2014 mZxid = 0x1000f mtime = Thu Jan 16 18:0

Re: log.retention.bytes.per.topic does not trigger deletion

2014-01-16 Thread Ben Summer
I see. I don't have version 0.8.1 yet. We just updated to 0.8.0 from beta after it became the "stable version". Good to know there is a fix for this. I'll start trying it out in some non-production environments. Thanks, Ben On Thu, Jan 16, 2014 at 7:42 PM, Guozhang Wang wrote: > In the latest

Re: Apache Kafka HTTP Producer & Consumer

2014-01-16 Thread Marc Labbe
I was looking to do something like this too. A few questions, maybe. I don't know what will be the use of this service, so let me know if my questions are outside your scope. Isn't it a problem to create a producer instance on every request? Passing messages in the URL would be a problem for me, why n

Re: Question about missing broker data in zookeeper

2014-01-16 Thread Guozhang Wang
To get the broker registration data you need to run get /brokers/ids/1 (ls /brokers/ids/1 will only retrieve its children, which is null) Guozhang On Thu, Jan 16, 2014 at 6:36 PM, Jagbir Hooda wrote: > Hi, > > I've a setup of three kafka servers (kafka_2.8.0-0.8.0) and three > zookeeper servers (zo
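The ls/get distinction Guozhang describes can be tried directly with the ZooKeeper CLI (the server address and broker id here are placeholders matching the thread, not values from a real cluster):

```shell
# 'ls' only lists the children of a znode; a broker registration znode
# has no children, so this prints an empty list:
bin/zkCli.sh -server localhost:2181 ls /brokers/ids/1

# 'get' returns the data stored at the znode itself -- the broker's
# registration JSON (host, port, jmx_port, timestamp, version):
bin/zkCli.sh -server localhost:2181 get /brokers/ids/1
```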

Re: log.retention.bytes.per.topic does not trigger deletion

2014-01-16 Thread Guozhang Wang
In the latest version, per-topic configs have been moved to ZooKeeper, and you set them using admin tools instead of writing them in the config files. Could you try trunk HEAD and see if this issue has already been resolved: https://issues.apache.org/jira/browse/KAFKA-554 Guozhang On Thu, Jan 16, 2014 a
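With the 0.8.1-era admin tools Guozhang refers to, a per-topic retention override is set roughly like this (tool flags follow the 0.8.1 topic admin tool; topic name, ZooKeeper address, and size are illustrative, taken from the values in this thread):

```shell
# Set a per-topic retention override in ZooKeeper instead of using
# log.retention.bytes.per.topic in server.properties:
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic first_topic --config retention.bytes=1099511627776
```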

Question about missing broker data in zookeeper

2014-01-16 Thread Jagbir Hooda
Hi, I've a setup of three kafka servers (kafka_2.8.0-0.8.0) and three zookeeper servers (zookeeper1, zookeeper2, zookeeper3). Everything works OK, but when I did a consumer test using the nodejs package node-kafka, it failed to retrieve any messages. When I looked more closely I found something int

Re: log.retention.bytes.per.topic does not trigger deletion

2014-01-16 Thread Ben Summer
v0.8.0, (non-beta) downloaded from the website. Let me know if there is a file I can check in the installation footprint that can give you an actual change id. Is there a fix to this particular feature that you know was made in a later build? Thanks for the quick response, Ben On Thu, Jan 16, 2

RE: python and kafka - how to use as a queue

2014-01-16 Thread Jagbir Hooda
Hi Arthur, I'm running into a very similar issue even with the latest version ( kafka-python @ V. 0.8.1_1 used with kafka_2.8.0-0.8.0.tar.gz). I have created a topic 'my-topic' with two partitions and 1-replication (across a set of 3 kafka brokers). I've published 100 messages to the topic (see

Re: log.retention.bytes.per.topic does not trigger deletion

2014-01-16 Thread Guozhang Wang
Hello Ben, Which version are you using? Guozhang On Thu, Jan 16, 2014 at 3:15 PM, Ben Summer wrote: > I tried using the following two retention properties > > log.retention.bytes=3221225472 > > log.retention.bytes.per.topic=first_topic:1099511627776,second_topic:1099511627776 > > which I inte

log.retention.bytes.per.topic does not trigger deletion

2014-01-16 Thread Ben Summer
I tried using the following two retention properties log.retention.bytes=3221225472 log.retention.bytes.per.topic=first_topic:1099511627776,second_topic:1099511627776 which I interpret to mean "by default, keep 3GB per topic partition, except for first_topic and second_topic, which should retain
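The two sizes in the properties above can be decoded with a quick shell check, confirming Ben's reading of 3 GiB as the default and 1 TiB for the two overridden topics:

```shell
# log.retention.bytes in the message: 3 GiB per topic partition
default_bytes=$((3 * 1024 * 1024 * 1024))
echo "$default_bytes"     # 3221225472

# per-topic override for first_topic and second_topic: 1 TiB
per_topic_bytes=$((1024 * 1024 * 1024 * 1024))
echo "$per_topic_bytes"   # 1099511627776
```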

Apache Kafka HTTP Producer & Consumer

2014-01-16 Thread Joe Stein
http://github.com/stealthly/dropwizard-kafka-http is an Apache Kafka HTTP Endpoint http://allthingshadoop.com/2014/01/16/apache-kafka-http-producer-and-consumerfor producing and consuming messages for topics. We have more updates planned coming out of client work that is in progress along with the
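An HTTP endpoint like this is typically exercised with curl; the paths and parameter names below are illustrative assumptions, not taken from the project's README, so check the repository for the actual API:

```shell
# Produce a message over HTTP (endpoint and parameter names are
# assumptions -- the project README defines the real ones):
curl -X POST "http://localhost:8080/message" \
  -d "topic=my-topic" -d "message=hello"

# Consume messages for a topic over HTTP (same caveat applies):
curl "http://localhost:8080/message?topic=my-topic"
```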

Re: Mirroring datacenters without vpn

2014-01-16 Thread Andrey Yegorov
Thank you. Reference to KAFKA-1092 is very useful. Unfortunately it is not a part of release version 0.8 but I hope 0.8.1 will see the light soon enough. -- Andrey Yegorov On Fri, Jan 10, 2014 at 5:08 PM, Joel Koshy wrote: > > > > Ops proposed to set up mirror to work over open intern

Re: compile error sbt / log4j , cannot exclude log4j

2014-01-16 Thread Joe Stein
"org.apache.kafka" % "kafka_2.10" % "0.8.0" intransitive(), "log4j" % "log4j" % "1.2.17", /*** Joe Stein Founder, Principal Consultant Big Data Open Source Security LLC http://www.stealth.ly Twitter: @allthingshadoop

Re: compile error sbt / log4j , cannot exclude log4j

2014-01-16 Thread Ran RanUser
Thanks, but unfortunately same error. I see in kafka.utils.Logging the import: import org.apache.log4j.Logger Is it possible for this class to reference org.slf4j.* instead, so users of Kafka can use their own slf4j-compatible logging lib? On Thu, Jan 16, 2014 at 7:09 AM, Joe Stein wrote:

Re: Consumer starts to consume after changing group id

2014-01-16 Thread Gürkan Oluç
Hello Jun, Thanks for your answer. I agree with you, I think problem is Storm 0.9 Spout. Thanks, Gurkan On Thu, Jan 16, 2014 at 5:49 PM, Jun Rao wrote: > If the Kafka consumption part is working, you would have to look into Storm > to see why the messages are dropped. > > Thanks, > > Jun > >

Re: High level consumer does not consumes in 0.8.0 version

2014-01-16 Thread Jun Rao
0.8.0 should be the most stable version. Also, try starting with a new consumer group and see if this works. The old group may already have offsets committed to ZK. Thanks, Jun On Thu, Jan 16, 2014 at 7:57 AM, Hussain Pirosha < hussain.piro...@impetus.co.in> wrote: > Hi Jun, > > I tried that t

RE: High level consumer does not consumes in 0.8.0 version

2014-01-16 Thread Hussain Pirosha
Hi Jun, I tried that too but it didn't work. What version of kafka is currently best suited for production? Thanks, Hussain -Original Message- From: Jun Rao [mailto:jun...@gmail.com] Sent: Thursday, January 16, 2014 9:24 PM To: users@kafka.apache.org Subject: Re: High level consume

Re: High level consumer does not consumes in 0.8.0 version

2014-01-16 Thread Jun Rao
By default, a new consumer consumes from the end of the queue, i.e. only newly produced messages will be consumed. Try producing some more messages after the consumer is up, or setting the consumer property to consume from the beginning. You shouldn't need to commit offset on every message, which is
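Both of Jun's suggestions can be tried quickly from the shell (topic name and ZooKeeper address are placeholders); in the 0.8 high-level consumer the property he refers to is auto.offset.reset:

```shell
# Read a topic from the beginning instead of only new messages
# (the 0.8 console consumer uses the high-level consumer underneath):
bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
  --topic my-topic --from-beginning

# The equivalent setting in the consumer config is:
#   auto.offset.reset=smallest
```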

Re: Consumer starts to consume after changing group id

2014-01-16 Thread Jun Rao
If the Kafka consumption part is working, you would have to look into Storm to see why the messages are dropped. Thanks, Jun On Wed, Jan 15, 2014 at 11:32 PM, Gürkan Oluç wrote: > Hello, > > It looks like it consumes from Kafka. here is the request log : > https://gist.github.com/gurkanoluc/3

Re: How to force producer and high level consumer to use different Ethernet cards on the kafka broker node

2014-01-16 Thread Jun Rao
Right, I don't think it's currently possible. Thanks, Jun On Wed, Jan 15, 2014 at 10:50 PM, Vadim Keylis wrote: > Hi Jun. Just to be clear. Its not possible on single node that has two NIC > cards(each NIC card assign its own ip address) to route traffic for > producer through one NIC card and

Re: High level consumer does not consumes in 0.8.0 version

2014-01-16 Thread Rob Withers
We need to try the one with filters, thanks. We found that if you call commitOffsets after each msg, it works. The IO is not bad. - charlie > On Jan 16, 2014, at 8:14 AM, Hussain Pirosha > wrote: > > Hello, > > While running the high level consumer mentioned on > https://cwiki.apache.org/conf

High level consumer does not consumes in 0.8.0 version

2014-01-16 Thread Hussain Pirosha
Hello, While running the high level consumer mentioned on https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example using the 0.8.0 release, the consumer does not receive any messages and blocks on the stream.hasNext() call. I have pasted the client side logs at https://gist.github.co