Re: invalid pom in maven central, for 0.8.0-beta1

2013-07-23 Thread Jason Rosenberg
Sorry, I realize now that what I am observing here was discussed in a previous thread. Although my case is a bit different, in that I'm just trying to use straight maven (no sbt or gradle, etc.). Anyway, the pom in maven central is invalid and should probably be removed, I should think. Jason On Wed, Jul

invalid pom in maven central, for 0.8.0-beta1

2013-07-23 Thread Jason Rosenberg
I have been using a pom file for 0.8.0 that I hand-edited from the one generated with sbt make:pom. Now that there's a version up on maven central, I'm trying to use that. It looks like the pom file now hosted on maven central is invalid for maven? I'm looking at this: http://search.maven.org/r

Re: found that the producer called localhost:9092 in the Kafka Java client.

2013-07-23 Thread Jun Rao
Is this 0.8? If so, the broker list is only used for retrieving metadata. The producer then connects to the broker using the returned metadata info. The hostname in the metadata depends on the OS setting for the host (see https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-OnEC2%2Cwhycan%27tm
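As Jun describes, the metadata response carries whatever hostname the broker registered, which by default comes from the OS. A minimal broker-side sketch of pinning that hostname in the 0.8 server.properties (the hostname below is a made-up placeholder, not a value from this thread):

```properties
# server.properties (Kafka 0.8): pin the hostname the broker registers in
# ZooKeeper, so producer metadata returns this name instead of whatever
# the OS resolves (which may be localhost). Placeholder hostname.
host.name=broker1.example.com
```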

found that the producer called localhost:9092 in the Kafka Java client.

2013-07-23 Thread yanbo . ai
Hi, I'm using the Kafka Java client to send events to my data center, but I found that the producer connects to localhost:9092. Actually, my broker list doesn't include localhost. Why? Thanks. Best Regards, yanbo.ai

Re: Consumer stops consuming after 5 gb

2013-07-23 Thread Neha Narkhede
Samir, when you observe that the consumer hangs, could you send around a thread dump? Also, can you check the value of the FetchRate mbean? Thanks, Neha On Tue, Jul 23, 2013 at 10:17 AM, Samir Madhavan < samir.madha...@fluturasolutions.com> wrote: > We just observed that the consumer is occu

Re: clean up kafka environment

2013-07-23 Thread Oleg Ruchovets
I think I found the answer: http://mail-archives.apache.org/mod_mbox/incubator-kafka-users/201204.mbox/%3ccafbh0q3bxaakybq1_yuhhukkhxx4rbqzpaa2pkr4u9+m4vy...@mail.gmail.com%3E As described in the link above, it should happen only once; after the parent is created there aren't any exceptions oc

Re: Consumer stops consuming after 5 gb

2013-07-23 Thread Samir Madhavan
We just observed that the consumer is occupying almost all of the RAM, which is 4 GB. On Tue, Jul 23, 2013 at 10:25 PM, Samir Madhavan < samir.madha...@fluturasolutions.com> wrote: > One of the common pattern we observed when we ran consumer from start > multiple times is that it catches up till the

Re: Consumer stops consuming after 5 gb

2013-07-23 Thread Samir Madhavan
One of the common patterns we observed when we ran the consumer from the start multiple times is that it catches up to the data the producer has produced and then just hangs. If we kill it and start it again, it starts consuming and then hangs again. We have added the try catch st

Re: clean up kafka environment

2013-07-23 Thread Oleg Ruchovets
Ok, got it, so the problem actually comes from zookeeper. Can someone point me to how I can clean up zookeeper to get rid of these messages? Thanks, Oleg. On Tue, Jul 23, 2013 at 12:38 PM, Neha Narkhede wrote: > These info messages show up when Kafka tries to create new consumer groups. > While

Re: clean up kafka environment

2013-07-23 Thread Neha Narkhede
These info messages show up when Kafka tries to create new consumer groups. While trying to create the children of /consumers/[group], if the parent path doesn't exist, the zookeeper server logs these messages. Kafka internally handles these cases correctly by first creating the parent node. Thank
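The create-the-parent-first pattern Neha describes can be sketched in pure Python. This is an illustration of the pattern, not Kafka's actual code; `store` is a stand-in for ZooKeeper's set of existing node paths:

```python
# Sketch (not Kafka's real implementation): create a ZooKeeper-style path
# by first ensuring every ancestor exists, the way Kafka creates
# /consumers/[group] before creating its children.

def ensure_path(store, path):
    """Create each ancestor of `path` in turn, then the node itself.

    `store` stands in for ZooKeeper: a set of existing node paths.
    """
    parts = [p for p in path.split("/") if p]
    current = ""
    for part in parts:
        current += "/" + part
        if current not in store:
            store.add(current)  # in ZooKeeper this would be a create() call

existing = {"/consumers"}
ensure_path(existing, "/consumers/group1/ids")
print(sorted(existing))
# → ['/consumers', '/consumers/group1', '/consumers/group1/ids']
```

Because every missing parent is created before its child, the NoNode condition the zookeeper server logs at INFO level never surfaces to the application.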

Re: clean up kafka environment

2013-07-23 Thread Neha Narkhede
If the console producer/consumer works fine, it would be safe to assume the broker is up. Thanks, Neha On Tue, Jul 23, 2013 at 8:44 AM, Oleg Ruchovets wrote: > Hi Jun, > > I made such tests: > bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic test > > Thi

Re: clean up kafka environment

2013-07-23 Thread Oleg Ruchovets
I changed my tests to use different consumer groups but I still see the same logs with errors: [2013-07-23 19:25:19,439] INFO Got user-level KeeperException when processing sessionid:0x1400bd6e296000c type:create cxid:0x15 zxid:0xfffe txntype:unknown reqpath:n/a Error Path:/consumers/group1

Re: clean up kafka environment

2013-07-23 Thread Oleg Ruchovets
Hi Jun, I made such tests:

bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic test
This is a message
This is another message

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This is a message

Re: Replacing brokers in a cluster (0.8)

2013-07-23 Thread Jun Rao
Right now, we just have what's in the command line help. Thanks, Jun On Tue, Jul 23, 2013 at 8:19 AM, Jason Rosenberg wrote: > Thanks. > > Are there instructions for how to run it? > > Jason > > > On Tue, Jul 23, 2013 at 12:51 AM, Jun Rao wrote: > > > You can try kafka-reassign-partitions no
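For reference, in 0.8 the reassignment tool reads the manual replica assignment Jun mentions from a JSON file. A hedged sketch of such a file (the topic name and broker ids are made up, and the exact JSON shape may differ in your build — check the tool's command-line help):

```json
{"partitions":
   [{"topic": "mytopic", "partition": 0, "replicas": [3, 4]}]
}
```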

Re: Consumer stops consuming after 5 gb

2013-07-23 Thread Jun Rao
Not sure about the error. However, your consumer seems to be lagging. Are the consumer offsets moving at all? If not, are your consumer threads still alive? You probably want to try/catch the consumer code to see if there are any unexpected exceptions. Thanks, Jun On Tue, Jul 23, 2013 at 7:58 AM
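Jun's try/catch suggestion can be sketched as follows. This is an illustrative stand-in, not the real consumer API: `messages` plays the role of the consumer iterator and `process` the application logic, so a swallowed exception shows up in the log instead of silently killing the thread:

```python
# Sketch: wrap per-message consumer logic in try/except so an unexpected
# exception can't leave the consumer thread dead while the process merely
# looks "hung" from the outside. `messages`/`process` are stand-ins.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("consumer")

def consume(messages, process):
    consumed = 0
    for msg in messages:
        try:
            process(msg)
            consumed += 1
        except Exception:
            # Log the full traceback rather than letting it propagate
            # out of the consuming loop unnoticed.
            log.exception("failed to process message %r", msg)
    return consumed

def process(msg):
    if msg == "bad":
        raise ValueError("poison message")

print(consume(["a", "bad", "b"], process))  # → 2
```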

Re: Replacing brokers in a cluster (0.8)

2013-07-23 Thread Jason Rosenberg
Thanks. Are there instructions for how to run it? Jason On Tue, Jul 23, 2013 at 12:51 AM, Jun Rao wrote: > You can try kafka-reassign-partitions now. You do have to specify the new > replica assignment manually. We are improving that tool to make it more > automatic. > > Thanks, > > Jun > > >

Re: clean up kafka environment

2013-07-23 Thread Jun Rao
Those exceptions are ok since they are at the info level. Is the broker running ok otherwise? Thanks, Jun On Tue, Jul 23, 2013 at 7:46 AM, Oleg Ruchovets wrote: > Hi All. > >I have on one machine kafka installation. I needed to move it to another > machine and I copied a kafka folder to th

Consumer stops consuming after 5 gb

2013-07-23 Thread Anurup Raveendran
After consuming about 5 GB of messages, the consumer stopped and got stuck at a particular offset. After running the following command, I got an error regarding the broker. $ bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group air-DummyProductionConsumerGroup-After --zkconnect 1
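The lag that ConsumerOffsetChecker reports boils down to a per-partition difference between the broker's log-end offset and the consumer's committed offset. A sketch of that arithmetic (the offset values are invented for illustration, not output from this thread):

```python
# Sketch of the arithmetic behind ConsumerOffsetChecker's "lag" column:
# lag = broker log-end offset minus the consumer's committed offset.
# A consumer stuck at one offset shows a lag that grows as the producer
# keeps appending. Offsets below are made-up illustration values.

def lag(log_end_offset, consumer_offset):
    return log_end_offset - consumer_offset

print(lag(5000000, 3200000))  # → 1800000
```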

clean up kafka environment

2013-07-23 Thread Oleg Ruchovets
Hi All. I have a kafka installation on one machine. I needed to move it to another machine, so I copied the kafka folder there. When I started kafka on the new machine I got this output: [2013-07-23 17:03:29,858] INFO Got user-level KeeperException when processing sessionid:0x1400bd6e29600

Re: Recommended log level in prod environment.

2013-07-23 Thread Calvin Lei
Thanks for the confirmation Jun. On Jul 23, 2013 12:54 AM, "Jun Rao" wrote: > Yes, the kafka-request log logs every request (in TRACE). It's mostly for > debugging purpose. Other than that, there is no harm to turn it off. > > Thanks, > > Jun > > > On Mon, Jul 22, 2013 at 7:59 PM, Calvin Lei wr