Hey Joel
I can see a ConsumerFetcherThread in the dump; I've included the full dump
this time as an attachment in case it proves useful to you.
Thanks for all the help
Pablo
On Fri, Jul 25, 2014 at 7:30 PM, Joel Koshy wrote:
Did you see any fetcher threads in the thread dump? If not it seems
they may have exited for some reason and the iterators are blocked on
receiving data.
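Joel's point can be illustrated with a small toy model (plain Python, not Kafka code): a "fetcher" thread feeds a queue that a blocking iterator drains. If the fetcher exits without signalling, a real blocking iterator would wait forever, which looks exactly like a hung consumer; the timeout below makes the stall observable instead.

```python
import queue
import threading

def fetcher(q, n, crash_after):
    # toy fetcher: pushes messages, then exits silently partway through
    for i in range(n):
        if i == crash_after:
            return  # simulates the fetcher thread dying on an error
        q.put(f"msg-{i}")

q = queue.Queue()
t = threading.Thread(target=fetcher, args=(q, 10, 3), daemon=True)
t.start()
t.join()

received = []
while True:
    try:
        # a real blocking iterator would use q.get() with no timeout
        # and hang here forever once the fetcher is gone
        received.append(q.get(timeout=0.2))
    except queue.Empty:
        break

print(received)  # only the messages fetched before the "crash"
```

The consumer thread itself shows as healthy in a thread dump (it is just waiting on the queue), which matches the symptom described above.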
On Fri, Jul 25, 2014 at 12:50:13PM +0100, Pablo Picko wrote:
Hey Joel
I actually did issue a kill -3 to get a view on the consumer at the time of
the issue. I have just found the output: I had 20 threads, and all of them
look like the following. I think it looks OK.
2014/07/24 00:24:03 | "pool-2-thread-20" prio=10 tid=0x7f55f4764800 nid=0x76b1 wa
Hey Joe
Thanks for the info. I have found out that my logger was misconfigured; I've
redeployed now with the proper settings, and hopefully I can catch the
proper error messages. I can confirm I'm seeing the appropriate log detail
now.
Thanks
Pablo
On Thu, Jul 24, 2014 at 8:47 PM, Joe Stein wrote:
Pablo, if you see this again, can you take a thread-dump of your
consumer and verify that the fetchers to all the brokers are still
alive as well as the corresponding iterator threads? It could be that
your consumer ran into some decoder error or some other exception
(although in general that shoul
For the consumer you should see logs like
"Connecting to zookeeper instance at " + config.zkConnect
"begin registering consumer " + consumerIdString + " in ZK"
consumerThreadId + " successfully owned partition " + partition + " for topic " + topic
"starting auto committer every " + config.autoCommi
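If none of those lines show up, one thing worth checking is the log4j configuration itself. A minimal fragment along these lines (the appender name and pattern are illustrative, not a prescribed setup) keeps the `kafka` package at INFO:

```properties
# log4j.properties -- minimal sketch; appender details are illustrative
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
# make sure the consumer's package is not silenced below INFO
log4j.logger.kafka=INFO
```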
Hey guys,
I have my log level set to INFO. That said, I am not seeing many logs at
all for Kafka. On startup I see detail about the serializer.class my
producer uses, but very little consumer-related logging. Is there anything
I should always see if my log config is correct at the INFO level?
What value are you setting for your number of streams when calling
createMessageStreamsByFilter, or, if using createMessageStreams, in the
TopicCount map ( topic -> numberOfStreams )?
How are you threading the iterator on each stream?
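For context, the high-level consumer hands back one blocking stream per requested thread, and the usual pattern is one worker thread per stream. A language-neutral sketch of that threading model (plain Python stand-ins, not the Kafka API):

```python
import queue
import threading

NUM_STREAMS = 3  # would correspond to numberOfStreams in the TopicCount map

# stand-ins for KafkaStream: each stream is a queue a "fetcher" has filled
streams = [queue.Queue() for _ in range(NUM_STREAMS)]
for i, s in enumerate(streams):
    for j in range(2):
        s.put(f"stream{i}-msg{j}")

results = []
lock = threading.Lock()

def worker(stream):
    # one thread per stream, mirroring the recommended consumer pattern
    while True:
        try:
            msg = stream.get(timeout=0.2)
        except queue.Empty:
            return
        with lock:
            results.append(msg)

threads = [threading.Thread(target=worker, args=(s,)) for s in streams]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))
```

If fewer worker threads than streams are running, some streams are never drained, which can also present as a partially stalled consumer.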
Hmm, that is a bit weird. Did you make sure the consumer logs are turned on
at least at the INFO level?
Guozhang
On Thu, Jul 24, 2014 at 10:05 AM, Pablo Picko wrote:
Guozhang
I didn't, no. I did spot other people with similar symptoms to my problem
mentioning your suggestion too, but I don't see anything in the log to
suggest it rebalanced. It could very well be the reason, but I can't see
anything suggesting it yet.
Thanks
Pablo
On 24 Jul 2014 17:57, "Guozha
Pablo,
Do you see any rebalance-related logs in the consumers?
Guozhang
On Thu, Jul 24, 2014 at 9:02 AM, Pablo Picko wrote:
Hey Guozhang
Thanks for the reply. No, nothing at all in the logs to suggest anything
went wrong.
It's really puzzling as to what's happened. When I restarted the consumer
everything worked again.
Prior to the restart I even stopped the producer for a bit. However, any
messages that got assigned to
Hi Pablo,
During that period, did you see any exceptions or errors in Broker C's logs
or in the consumer logs?
Guozhang
On Thu, Jul 24, 2014 at 6:23 AM, Pablo Picko wrote:
> Hello all
>
> Some background.
>
> I have 3 Kafka brokers A, B and C. There is a Kafka topic called topic
> with 20 parti