at kafka.network.Processor.read(SocketServer.scala:445)
at kafka.network.Processor.run(SocketServer.scala:341)
at java.lang.Thread.run(Thread.java:745)
I have written a simple Kafka High Level consumer. I have not specified any
value for the
I want to know how to tune/set up a high level Kafka client against a Kafka server in
EC2. I set zookeeper.session.timeout.ms=5. I found that after some time I
got the following error in the logs. I want to know how to tune Kafka parameters so
the consumer runs forever. I checked and found ZK is running.
Hi
Our kafka consumer application has been running for a week without any problems.
But today I ran into an OOME while trying to consume one topic with 100 partitions
using 100 consumers.
The configuration for the consumers is:
zookeeper.session.timeout.ms = 1
zookeeper.sync.time.ms = 200
au
Can you confirm that you are not actually seeing the messages on the
lagging broker?
Because if the Max Lag is 0, it should mean that the consumer has read up to
the log end offset of the broker.
Thanks,
Mayuresh
On Fri, Jul 10, 2015 at 8:29 PM, Allen Wang
wrote:
> We have two applications
We have two applications that consume all messages from one Kafka cluster.
We found that the MessagesPerSec metric started to diverge after some time.
One of them matches the MessagesInPerSec metric from the Kafka broker,
while the other is lower than the broker metric and appears to have some
mess
Consider the high level consumer example
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example.
Using the Java API, is it possible to fetch the consumer id for this
particular consumer as displayed in the ConsumerOffsetChecker "Owner" field?
Thanks,
Rahul.
Hello Dima,
The current consumer does not have an explicit memory control mechanism, but
you can try to indirectly bound its memory usage via the following configs:
fetch.message.max.bytes and queued.max.message.chunks. Details can be found
at http://kafka.apache.org/documentation.html#consumerconfig
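A rough sketch of how those two settings bound memory, assuming the documented model where each stream queues up to queued.max.message.chunks chunks of at most fetch.message.max.bytes each (the numbers below are illustrative, not recommendations):

```java
import java.util.Properties;

public class ConsumerMemoryBound {
    // Approximate upper bound on buffered bytes in the high level consumer:
    // streams * queued.max.message.chunks * fetch.message.max.bytes
    public static long maxBufferedBytes(long fetchMessageMaxBytes, int queuedMaxChunks, int numStreams) {
        return fetchMessageMaxBytes * queuedMaxChunks * numStreams;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("fetch.message.max.bytes", "1048576");   // 1 MB per fetched chunk
        props.put("queued.max.message.chunks", "2");       // at most 2 chunks queued per stream
        // With 10 streams this bounds the internal queues at roughly 20 MB:
        System.out.println(maxBufferedBytes(1048576L, 2, 10));
    }
}
```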
Hi
I ran into an OOME while trying to consume from one topic with 10 partitions
(100,000 messages per partition) using 5 consumers (consumer groups),
consumer.timeout=10ms. The OOME occurred 1-2 minutes after start.
Java heap: Xms=1024M
LAN: about 10 Gbit
This is a standalone application.
Kafka version 0.8.2
Rs to make it better.
>
> -Joe Lawson
>
>
> From: Pranay Agarwal
> Sent: Wednesday, February 25, 2015 1:45 AM
> To: users@kafka.apache.org
> Subject: Re: Kafka High Level Consumer
>
> Thanks Jun. It seems it was an issue with jruby client I was using. Now,
> they fix
Thanks Jun. It seems it was an issue with jruby client I was using. Now,
they fixed it.
-Pranay
On Mon, Feb 23, 2015 at 4:57 PM, Jun Rao wrote:
> Did you enable auto offset commit?
>
> Thanks,
>
> Jun
>
> On Tue, Feb 17, 2015 at 4:22 PM, Pranay Agarwal
> wrote:
>
> > Hi,
> >
> > I am trying to
Did you enable auto offset commit?
Thanks,
Jun
On Tue, Feb 17, 2015 at 4:22 PM, Pranay Agarwal
wrote:
> Hi,
>
> I am trying to read kafka consumer using high level kafka Consumer API. I
> had to restart the consumers for some reason but I kept the same group id.
> It seems the consumers have s
Hi,
I am trying to consume from Kafka using the high level Consumer API. I
had to restart the consumers for some reason but I kept the same group id.
It seems the consumers have started consuming from the beginning (0 offset)
instead of from the point they had already consumed.
What am I doing wrong?
Hi,
you are programmatically shutting down the executor after 10 seconds:
try {
    Thread.sleep(10000); // 10 seconds
} catch (InterruptedException ie) {
}
example.shutdown();
If you do not execute this code, your threads will run forever.
Davide B.
--
Thanks Joe Stein
This worked :)
On Fri, Sep 12, 2014 at 3:19 PM, Rahul Mittal
wrote:
> Hi ,
> Is there a way in kafka to read data from all topics, from a consumer
> group without specifying topics in a dynamic way.
> That is if new topics are created on kafka brokers the consumer group
> should
You want to use the createMessageStreamsByFilter and pass in a WhiteList
with a regex that would include everything you want... here is e.g. how to
use that
https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/ConsoleConsumer.scala#L196
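The Whitelist is essentially a regular expression over topic names; a small sketch of that idea (the topic names below are made up, and the actual createMessageStreamsByFilter call is left in a comment since it needs a live consumer connector):

```java
import java.util.regex.Pattern;

public class TopicWhitelistSketch {
    // A whitelist pattern like ".*" matches every topic name, so newly created
    // topics are picked up without listing them explicitly.
    public static boolean matches(String whitelistRegex, String topic) {
        return Pattern.matches(whitelistRegex, topic);
    }

    public static void main(String[] args) {
        String whitelist = ".*";
        for (String topic : new String[] {"orders", "clicks", "brand-new-topic"}) {
            System.out.println(topic + " -> " + matches(whitelist, topic));
        }
        // With the high level consumer this would be passed as, roughly:
        // consumer.createMessageStreamsByFilter(new Whitelist(".*"), numStreams);
    }
}
```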
Hi,
Is there a way in Kafka to read data from all topics from a consumer group,
without specifying the topics, in a dynamic way?
That is, if new topics are created on the Kafka brokers, the consumer group
should figure it out and start reading from the new topic as well, without
explicitly defining the new topic
Hi Josh,
Yes. Consumption distribution in Kafka is at the granularity of
partitions, i.e. each partition will only be consumed by one consumer
within the group.
Guozhang
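As a rough illustration of that guarantee (the modulo assignment below is only illustrative, not Kafka's actual rebalance algorithm): each partition maps to exactly one consumer thread, so with as many threads as partitions every thread owns exactly one partition.

```java
import java.util.HashMap;
import java.util.Map;

public class PartitionOwnershipSketch {
    // Assign each partition to exactly one thread; no partition ends up with two owners.
    public static Map<Integer, Integer> assign(int numPartitions, int numThreads) {
        Map<Integer, Integer> owner = new HashMap<>();
        for (int p = 0; p < numPartitions; p++) {
            owner.put(p, p % numThreads); // illustrative round-robin
        }
        return owner;
    }

    public static void main(String[] args) {
        // 4 partitions, 4 threads: thread i owns partition i and nothing else.
        System.out.println(assign(4, 4));
    }
}
```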
On Tue, Aug 19, 2014 at 2:01 AM, Josh J wrote:
> Hi,
>
> For the kafka high level consumer, if I create ex
Hi,
For the kafka high level consumer, if I create exactly the number of
threads as the number of partitions, is there a guarantee that each thread
will be the only thread that reads from a particular partition? I'm
following this example
<https://github.com/bingoohuang/java-sand
We have a 3 node cluster, with a separate physical box for the consumer group,
and the consumer that died is
"mupd_logmon_hb_events_sdc-q1-logstream-8-1402448850475-6521f70a". On the box,
I see the above Exception. What can I configure such that when a partition in
the Consumer Group does not have an "Owner", other
From which consumer instance did you see these exceptions?
Guozhang
On Thu, Jun 12, 2014 at 4:39 PM, Bhavesh Mistry
wrote:
> Hi Kafka Dev Team/ Users,
>
> We have high level consumer group consuming from 32 partitions for a
> topic. We have been running 48 consumers in this group across mu
Hi Kafka Dev Team/ Users,
We have a high level consumer group consuming from 32 partitions for a
topic. We have been running 48 consumers in this group across multiple
servers. We have kept 16 as back-up consumers, hoping that when a
consumer dies, meaning when Zookeeper does not have an owner
> streams = consumerMap.get(topic);
>
> // now launch all the threads
> executor = Executors.newFixedThreadPool(a_numThreads);
>
> // now create an object to consume the messages
Session termination can happen either when the client or zookeeper process
pauses (due to GC) or when the client process terminates. A sustainable
solution is to tune GC settings. For now, you can try increasing
zookeeper.session.timeout.ms.
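For example, the timeout could be raised in the consumer properties (30000 ms is only an illustrative value; pick one longer than your observed GC pauses):

```java
import java.util.Properties;

public class ZkTimeoutConfig {
    // Build consumer properties with a larger zookeeper session timeout so that
    // a GC pause shorter than the timeout does not kill the session.
    public static Properties build() {
        Properties props = new Properties();
        props.put("zookeeper.session.timeout.ms", "30000"); // illustrative: 30 s
        props.put("zookeeper.sync.time.ms", "200");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("zookeeper.session.timeout.ms"));
    }
}
```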
On Sun, Mar 9, 2014 at 3:44 PM, Ameya Bhagat wrote:
I am using a high level consumer as described at:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
I am noticing that my consumer does not run forever and ends after some
time (< 15s). At the zookeeper side, I see the following:
INFO Processed session termination for sessi
ck to
either most current or oldest message offset.
But others' more experienced opinions on this will be great.
Regards,
Pushkar
On Feb 14, 2014 4:40 PM, wrote:
> Good Morning,
>
> I am testing the Kafka High Level Consumer using the ConsumerGroupExample
> code from the Kafka
Good Morning,
I am testing the Kafka High Level Consumer using the ConsumerGroupExample code
from the Kafka site. I would like to retrieve all the existing messages on the
topic called "test" that I have in the Kafka server config. Looking at other
blogs, auto.offset.reset should
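A minimal sketch of the properties involved, assuming the 0.8-era high level consumer (the group id below is hypothetical): auto.offset.reset only applies when the group has no committed offset, so re-reading existing messages usually means starting with a fresh group.id and auto.offset.reset=smallest.

```java
import java.util.Properties;

public class ReplayFromBeginningConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("group.id", "replay-test-group");  // hypothetical fresh group id
        props.put("auto.offset.reset", "smallest");  // start from earliest offset when none is committed
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("auto.offset.reset"));
    }
}
```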