Raised a Stack Overflow question for this:
http://stackoverflow.com/questions/36542139/kafka-high-level-consumer-not-consuming-messages-socket-closure-issue

What could be the problem here?

Regards,
Nishant
> -Original Message-
> From: Kudumula, Surender
> Sent: 24 November 2015 10:07
> To: users@kafka.apache.org
> Subject: Java consumer not consuming messages whereas kafka command line
> client consumes all the messages
Hi all,
I have been trying to think about why it's happening. Can anyone point me in the right direction in terms of the config I am missing? The producer is on another node in the same cluster and the consumer is on a different node. But as I said, the command line client works and consumes all the messages. If I
"Rebalancing attempt failed" indicates the rebalancing failed. I added some
notes in the last item in
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped,why?
Thanks,
Jun
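As a sketch of the knobs that FAQ entry points at for "Rebalancing attempt failed" (the property names are the 0.8 consumer's; the values here are illustrative assumptions, not recommendations from this thread):

```java
import java.util.Properties;

public class RebalanceTuning {
    // Give the consumer more rebalance attempts and more time between them,
    // which is what the FAQ suggests when rebalancing keeps failing.
    public static Properties tune(Properties props) {
        props.put("rebalance.max.retries", "8");   // assumed value; the 0.8 default is 4
        props.put("rebalance.backoff.ms", "3000"); // assumed value
        return props;
    }

    public static void main(String[] args) {
        Properties p = tune(new Properties());
        System.out.println(p.getProperty("rebalance.max.retries"));
    }
}
```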
On Fri, Apr 11, 2014 at 11:23 PM, Arjun wrote:
Even after changing the fetch wait max ms, the same thing is repeating; just that some partitions have owners now. I mean:
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group group1 --zkconnect zkhost:zkport --topic testtopic
Group Topic Pid Off
Yup will try that
On Apr 12, 2014 8:42 AM, "Jun Rao" wrote:
Console consumer also uses the high level consumer. Could you try setting
fetch.wait.max.ms to 100ms?
Thanks,
Jun
On Fri, Apr 11, 2014 at 9:56 AM, Arjun Kota wrote:
Are you committing offsets manually after you consume, since you mentioned earlier that "auto.commit.offset" is false?
-Original Message-
From: Arjun Kota [mailto:ar...@socialtwist.com]
Sent: Friday, April 11, 2014 10:56 AM
To: users@kafka.apache.org
Subject: Re: consumer not consuming messages
Console consumer works fine. It's the high-level Java consumer which is giving this problem.
Thanks
Arjun Narasimha Kota
On Apr 11, 2014 8:42 PM, "Jun Rao" wrote:
We may have a bug that doesn't observe fetch.min.bytes accurately. So a lower fetch.wait.max.ms will improve consumer latency.
Could you run a console consumer and see if you have the same issue? That
will tell us if this is a server side issue or an issue just in your
consumer.
Thanks,
Jun
I changed the time to 60 seconds; even now I see the same result. The consumer is not consuming the messages.
Thanks
Arjun Narasimha Kota
On Friday 11 April 2014 10:36 AM, Arjun wrote:
Yup, I will change the value and recheck. Thanks for the help.
Thanks
Arjun Narasimha Kota
On Friday 11 April 2014 10:28 AM, Guozhang Wang wrote:
Hi,
From my understanding, the fetch wait max time is the maximum time the consumer waits if there are no messages in the broker. If there are messages in the broker, it just gets all the messages from the broker. Is my understanding wrong?
thanks
Arjun Narasimha Kota
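That reading is close; a minimal toy model of the broker-side fetch semantics under discussion (a simplification for illustration, not Kafka's actual code): the broker answers a fetch as soon as at least fetch.min.bytes are available, or once fetch.wait.max.ms has elapsed, whichever comes first.

```java
public class FetchDecision {
    // Toy model: a fetch request is answered when enough bytes have
    // accumulated, or when the wait timeout expires, whichever is first.
    static boolean shouldRespond(long availableBytes, long fetchMinBytes,
                                 long waitedMs, long fetchWaitMaxMs) {
        return availableBytes >= fetchMinBytes || waitedMs >= fetchWaitMaxMs;
    }

    public static void main(String[] args) {
        // A single 200-byte message with fetch.min.bytes=128: returned at once.
        System.out.println(shouldRespond(200, 128, 0, 10000));   // true
        // No data yet, still inside the wait window: the consumer blocks.
        System.out.println(shouldRespond(0, 128, 500, 10000));   // false
        // Timeout expired: an empty response comes back and the consumer re-polls.
        System.out.println(shouldRespond(0, 128, 10000, 10000)); // true
    }
}
```

With a very large fetch.wait.max.ms and unmet fetch.min.bytes, a consumer can sit in the "keep waiting" branch long enough to look stuck, which is Guozhang's point below.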
What I tried to say is that it may be caused by your "fetch.wait.max.ms"="18" being too large. Try a small value and see if that helps.
On Thu, Apr 10, 2014 at 9:44 PM, Arjun wrote:
Hi,
I could not see any out of memory exceptions in the broker logs. One thing I can see is that I may have configured the consumer poorly. If it's not too much to ask, can you let me know the changes I have to make to overcome this problem?
Thanks
Arjun Narasimha Kota
Hi Arjun,
This seems to be the cause:
https://issues.apache.org/jira/browse/KAFKA-1016
Guozhang
On Thu, Apr 10, 2014 at 9:21 PM, Arjun wrote:
I see this in the consumer logs:
[kafka.consumer.ConsumerFetcherManager]
[ConsumerFetcherManager-1397188062631] Adding fetcher for partition
[taf.referral.emails.service,11], initOffset 250 to broker 1 with
fetcherId 0
but no data, and I get this warning:
[ConsumerFetcherThread-group1_ip-10-91-
I hope this one will give you a better idea.
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group group1 --zkconnect zkhost:port --topic testtopic
Group   Topic      Pid  Offset  logSize  Lag  Owner
group1  testtopic  0    253
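For what it's worth, the Lag column in ConsumerOffsetChecker output is just logSize minus the committed offset; a trivial illustration with hypothetical numbers (the logSize here is an assumption, since the output above is truncated):

```java
public class LagCheck {
    // Lag as ConsumerOffsetChecker reports it: log end offset (logSize)
    // minus the consumer group's committed offset for the partition.
    static long lag(long logSize, long committedOffset) {
        return logSize - committedOffset;
    }

    public static void main(String[] args) {
        // Committed offset 253 with a hypothetical logSize of 254:
        System.out.println(lag(254, 253)); // 1 unconsumed message
    }
}
```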
Are you using the high-level consumer? How did you set fetch.wait.max.ms and fetch.min.bytes?
Thanks,
Jun
On Thu, Apr 10, 2014 at 8:13 PM, Arjun wrote:
This doesn't happen only when the topic is newly created; it happens even if the topic has a lot of messages, but all messages are consumed by the consumer. Now if you add just one message, the consumer will not fetch it; if in that scenario we add more than 10 messages, things work fine. (10 is just an arbi
The consumer does use specific topics.
On Apr 11, 2014 6:23 AM, "Arjun Kota" wrote:
Yes the message shows up on the server.
On Apr 11, 2014 12:07 AM, "Guozhang Wang" wrote:
Hi Arjun,
If you only send one message, does that message show up on the server? Does your consumer use wildcard topics or specific topics?
Guozhang
On Thu, Apr 10, 2014 at 9:20 AM, Arjun wrote:
But we have auto offset reset set to smallest, not largest; even then this issue arises? If so, is there any workaround?
Thanks
Arjun Narasimha Kota
On Thursday 10 April 2014 09:39 PM, Guozhang Wang wrote:
It could be https://issues.apache.org/jira/browse/KAFKA-1006.
Guozhang
On Thu, Apr 10, 2014 at 8:50 AM, Arjun wrote:
It's auto created, but even after topic creation this is the scenario.
Arjun
On Thursday 10 April 2014 08:41 PM, Guozhang Wang wrote:
Hi Arjun,
Did you manually create the topic or use auto.topic.creation?
Guozhang
On Thu, Apr 10, 2014 at 7:39 AM, Arjun wrote:
Hi,
We have a 3-node Kafka 0.8 setup with a ZooKeeper ensemble. We use the high-level consumer with auto commit offset set to false. I am facing a peculiar problem with Kafka. When I send some 10-20 messages or so, the consumer starts to consume the messages. But if I send only one message to Kafka, the
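Pulling together the settings mentioned across this thread, a hedged sketch of what such an 0.8 high-level consumer configuration might look like (the ZooKeeper address and the fetch values are illustrative assumptions, not the poster's actual settings):

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    public static Properties build() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zkhost:2181"); // assumed address
        props.put("group.id", "group1");
        props.put("auto.commit.enable", "false");      // offsets committed manually, as in the thread
        props.put("auto.offset.reset", "smallest");
        props.put("fetch.min.bytes", "128");
        props.put("fetch.wait.max.ms", "100");         // small value, per Jun's suggestion in this thread
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("auto.commit.enable"));
    }
}
```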
I don't see the log in your email. Perhaps you can send out a link to
things like pastebin?
Thanks,
Jun
On Thu, Feb 13, 2014 at 8:06 AM, Arjun Kota wrote:
Yes, I have set it to trace as it will help me debug things. Have you found any issue in it?
On Feb 13, 2014 9:12 PM, "Jun Rao" wrote:
The request log is in trace. Take a look at the log4j property file in
config/.
Thanks,
Jun
On Wed, Feb 12, 2014 at 9:45 PM, Arjun wrote:
The setup has 3 Kafka brokers running on 3 different EC2 nodes (I added host.name in the broker config). I am not committing any messages in my consumer. The consumer is an exact replica of the ConsumerGroupExample. The test machine (10.60.15.123) is outside these systems' security group but has
I am sorry, but I could not locate the offset in the request log. I have turned on debug for the logs but couldn't find it. Do you know any pattern I can look for?
Thanks
Arjun Narasimha Kota
On Wednesday 12 February 2014 09:26 PM, Jun Rao wrote:
Hi,
No, I haven't changed auto commit enable. That one message is the one which got in earlier, a long time back (2 weeks back). After that I started working again recently and things started behaving weird. I don't have the request log now; will check and let you know.
Thanks
Arjun Narasimha K
Interesting. So you have 4 messages in the broker. The checkpointed offset
for the consumer is at the 3rd message. Did you change the default setting
of auto.commit.enable? Also, if you look at the
request log, what's the offset in the fetch request from this consumer?
Thanks,
Jun
The topic name is correct; the output of the ConsumerOffsetChecker is:
arjunn@arjunn-lt:~/Downloads/Kafka0.8/new/kafka_2.8.0-0.8.0$
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group group1
--zkconnect 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183 --topic
taf.referral.emails.service
G
Could you double check that you used the correct topic name? If so, could
you run ConsumerOffsetChecker as described in
https://cwiki.apache.org/confluence/display/KAFKA/FAQ and see if there is
any lag?
Thanks,
Jun
On Tue, Feb 11, 2014 at 8:45 AM, Arjun Kota wrote:
fetch.wait.max.ms=1
fetch.min.bytes=128
My message size is much more than that.
On Feb 11, 2014 9:21 PM, "Jun Rao" wrote:
What's the fetch.wait.max.ms and fetch.min.bytes you used?
Thanks,
Jun
On Tue, Feb 11, 2014 at 12:54 AM, Arjun wrote:
With the same group id, from the console consumer it's working fine.
On Tuesday 11 February 2014 01:59 PM, Guozhang Wang wrote:
Nope; I will try that, thanks for suggesting.
On Tuesday 11 February 2014 01:59 PM, Guozhang Wang wrote:
Arjun,
Are you using the same group name for the console consumer and the java
consumer?
Guozhang
On Mon, Feb 10, 2014 at 11:38 PM, Arjun wrote:
Hi Jun,
No, it's not that problem. I am not getting what the problem is; can you please help?
thanks
Arjun Narasimha Kota
On Monday 10 February 2014 09:10 PM, Jun Rao wrote:
Does
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whydoesmyconsumernevergetanydata?
apply?
Thanks,
Jun
On Sun, Feb 9, 2014 at 10:27 PM, Arjun wrote:
> Hi,
>
> I started using kafka some time back. I was experimenting with 0.8. My
> problem is the kafka is unable to consume the me
As an extension of the same problem, I am seeing this "INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor)" in my log continuously. I searched the web and found this code in an exception block:
https://apache.googlesource.com/kafka/+/40a80fa7b7ae3d49e32c40fbaad1a4e402b2ac71/cor