Sorry, I realize now that what I'm observing here was discussed in a previous
thread, although my case is a bit different in that I'm using straight Maven
(no sbt or Gradle, etc.). Anyway, the pom in Maven Central is invalid and
should probably be removed.
Jason
On Wed, Jul
I have been using a pom file for 0.8.0 that I hand-edited from the one
generated with sbt make:pom. Now that there's a version up on maven
central, I'm trying to use that.
It looks like the pom file now hosted on Maven Central is invalid for Maven.
I'm looking at this:
http://search.maven.org/r
Is this 0.8? If so, the broker list is only used for retrieving metadata.
The producer then connects to the broker using the returned metadata info.
The hostname in the metadata depends on the OS setting for the host (see
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-OnEC2%2Cwhycan%27tm
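The two-step behavior described above (seed brokers for metadata, then connect to the hosts returned in the metadata) maps to a couple of configuration properties. A minimal sketch under the assumption of a 0.8-era setup; the host names are placeholders, and property names should be checked against the docs for your exact version:

```properties
# producer.properties (0.8 producer): seed brokers, used only for the
# initial metadata request. Subsequent connections go to whatever host
# names the metadata response contains.
metadata.broker.list=broker1.example.com:9092,broker2.example.com:9092

# server.properties on each broker: the host name the broker registers
# and returns in metadata. If left unset, the advertised host can
# resolve to localhost depending on the OS host configuration, which
# produces exactly the "producer called localhost:9092" symptom.
host.name=broker1.example.com
```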
Hi,
I'm using the Kafka Java client to send events to my data center, but found that
the producer connects to localhost:9092.
Actually, my broker list doesn't include localhost. Why?
Thanks.
Best Regards,
yanbo.ai
Samir,
When you observe that the consumer hangs, could you send around the thread
dump? Also, can you check the value of the FetchRate mbean?
Thanks,
Neha
On Tue, Jul 23, 2013 at 10:17 AM, Samir Madhavan <
samir.madha...@fluturasolutions.com> wrote:
> We just observed that the consumer is occupying almost all of the RAM, which
> is 4 GB.
I think I found the answer:
http://mail-archives.apache.org/mod_mbox/incubator-kafka-users/201204.mbox/%3ccafbh0q3bxaakybq1_yuhhukkhxx4rbqzpaa2pkr4u9+m4vy...@mail.gmail.com%3E
As described in the link above, it should happen only once; after the
parent is created, there shouldn't be any further exceptions occurring.
We just observed that the consumer is occupying almost all of the RAM, which
is 4 GB.
On Tue, Jul 23, 2013 at 10:25 PM, Samir Madhavan <
samir.madha...@fluturasolutions.com> wrote:
> One of the common patterns we observed when we ran the consumer from the
> start multiple times is that it catches up to the
One of the common patterns we observed when we ran the consumer from the
start multiple times is that it catches up to the data the producer has
produced and then just hangs. If we kill it and start it again, it starts
consuming and then hangs again. We have added the try/catch statement
OK, got it, so the problem actually comes from ZooKeeper. Can someone
point me to how I can clean up ZooKeeper to get rid of these messages?
Thanks
Oleg.
On Tue, Jul 23, 2013 at 12:38 PM, Neha Narkhede wrote:
> These info messages show up when Kafka tries to create new consumer groups.
> While
These info messages show up when Kafka tries to create new consumer groups.
While trying to create the children of /consumers/[group], if the parent
path doesn't exist, the zookeeper server logs these messages. Kafka
internally handles these cases correctly by first creating the parent node.
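The "create the parent node first" handling described above can be sketched as a small recursive routine. This is not Kafka's actual code; the ZooKeeper server is simulated here with an in-memory set of paths, and `FakeZooKeeper` and `create_recursive` are hypothetical names for illustration:

```python
# Sketch: creating /consumers/<group>/ids/<id> when parents may not exist.
# A real ZooKeeper server logs an INFO-level KeeperException (NoNode) for
# the failed create, like the messages quoted in this thread, but the
# client recovers by creating the parent path first.

class FakeZooKeeper:
    def __init__(self):
        self.nodes = {"/"}  # stand-in for the znode tree

    def create(self, path):
        parent = path.rsplit("/", 1)[0] or "/"
        if parent not in self.nodes:
            # Corresponds to the server-side "Got user-level
            # KeeperException ... Error Path:/consumers/group1" log line.
            raise KeyError("NoNode: " + parent)
        self.nodes.add(path)

def create_recursive(zk, path):
    """Create path, creating any missing parents first."""
    try:
        zk.create(path)
    except KeyError:
        create_recursive(zk, path.rsplit("/", 1)[0] or "/")
        zk.create(path)

zk = FakeZooKeeper()
create_recursive(zk, "/consumers/group1/ids/consumer-1")
print("/consumers/group1" in zk.nodes)  # prints True
```

The point is that the NoNode error is expected on the first registration for a new group; it is handled, so the INFO log is noise rather than a fault.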
Thank
If the console producer/consumer works fine, it would be safe to assume the
broker is up.
Thanks,
Neha
On Tue, Jul 23, 2013 at 8:44 AM, Oleg Ruchovets wrote:
> Hi Jun ,
>
> I ran the following tests:
> *bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic
> test
> *
>
> Thi
I changed my tests to use different consumer groups but still get the same
logs with the error:
[2013-07-23 19:25:19,439] INFO Got user-level KeeperException when
processing sessionid:0x1400bd6e296000c type:create cxid:0x15
zxid:0xfffe txntype:unknown reqpath:n/a Error
Path:/consumers/group1
Hi Jun ,
I ran the following tests:
*bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic test
*
This is a message
This is another message
*> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic
test --from-beginning*
This is a message
Right now, we just have what's in the command line help.
Thanks,
Jun
On Tue, Jul 23, 2013 at 8:19 AM, Jason Rosenberg wrote:
> Thanks.
>
> Are there instructions for how to run it?
>
> Jason
>
>
> On Tue, Jul 23, 2013 at 12:51 AM, Jun Rao wrote:
>
> > You can try kafka-reassign-partitions no
Not sure about the error. However, your consumer seems to be lagging. Are
the consumer offsets moving at all? If not, are your consumer threads still
alive? You probably want to try/catch the consumer code to see if there are
any unexpected exceptions.
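The try/catch advice above can be sketched generically: wrap the per-message work so an unexpected exception surfaces in the logs instead of silently killing the consumer thread. This is a hypothetical illustration, not Kafka client code; the message stream is stubbed with a plain generator:

```python
# Sketch: guard per-message processing so one bad message is logged
# rather than terminating the consuming loop.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("consumer")

def message_stream():
    # Stand-in for iterating a Kafka consumer stream.
    yield b"ok-1"
    yield b"boom"
    yield b"ok-2"

def process(msg):
    if msg == b"boom":
        raise ValueError("bad message")
    return msg.decode()

processed, failed = [], 0
for msg in message_stream():
    try:
        processed.append(process(msg))
    except Exception:
        failed += 1
        # logging.exception records the traceback, so the hang/death
        # cause is visible instead of the thread dying silently.
        log.exception("unexpected exception while processing %r", msg)

print(processed, failed)  # ['ok-1', 'ok-2'] 1
```

If the offsets stop moving and nothing appears in the logs with a guard like this in place, the thread is likely blocked rather than dead, which is where a thread dump helps.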
Thanks,
Jun
On Tue, Jul 23, 2013 at 7:58 AM
Thanks.
Are there instructions for how to run it?
Jason
On Tue, Jul 23, 2013 at 12:51 AM, Jun Rao wrote:
> You can try kafka-reassign-partitions now. You do have to specify the new
> replica assignment manually. We are improving that tool to make it more
> automatic.
>
> Thanks,
>
> Jun
>
>
>
Those exceptions are OK since they are at the INFO level. Is the broker
running OK otherwise?
Thanks,
Jun
On Tue, Jul 23, 2013 at 7:46 AM, Oleg Ruchovets wrote:
> Hi All.
>
> I have a Kafka installation on one machine. I needed to move it to another
> machine, and I copied the Kafka folder to th
After consuming about 5 GB of messages, it stopped consuming and got
stuck at a particular offset. After running the following command, I got an
error about the broker.
$ bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group
air-DummyProductionConsumerGroup-After --zkconnect 1
Hi All.
I have a Kafka installation on one machine. I needed to move it to another
machine, and I copied the Kafka folder to that machine.
When I started Kafka on the new machine, I got this output:
[2013-07-23 17:03:29,858] INFO Got user-level KeeperException when
processing sessionid:0x1400bd6e29600
Thanks for the confirmation Jun.
On Jul 23, 2013 12:54 AM, "Jun Rao" wrote:
> Yes, the kafka-request log logs every request (in TRACE). It's mostly for
> debugging purposes. Other than that, there is no harm in turning it off.
>
> Thanks,
>
> Jun
>
>
> On Mon, Jul 22, 2013 at 7:59 PM, Calvin Lei wr