For those partitions that are lagging, do you see fetch requests in the log?
Thanks,
Jun
On Fri, Jul 19, 2013 at 12:30 AM, Nihit Purwar wrote:
> Hello Jun,
>
> Sorry for the delay in getting the logs.
> Here are the 3 logs from the 3 servers with trace level as suggested:
>
>
> https://docs.google.com/file/d/0B5etsywBa-bkQnBESUJzNV9yRWc/edit?usp=sharing
Hello Jun,
Sorry for the delay in getting the logs.
Here are the 3 logs from the 3 servers with trace level as suggested:
https://docs.google.com/file/d/0B5etsywBa-bkQnBESUJzNV9yRWc/edit?usp=sharing
Please have a look and let us know if you need anything else to further debug
this problem.
Thanks,
Nihit
Hi Jun,
I did put in only one topic when starting the consumer, and I used the same
API, "createMessageStreams".
As for the trace-level logs of the kafka consumer, we will send those to you soon.
Thanks again for replying.
Nihit
On 10-Jul-2013, at 10:38 PM, Jun Rao wrote:
> Also, just so that we are on the same page.
Also, just so that we are on the same page. I assume that you used the
following api. Did you just put in one topic in the topicCountMap?
def createMessageStreams(topicCountMap: Map[String,Int]): Map[String,
List[KafkaStream[Array[Byte],Array[Byte]]]]
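A minimal sketch of that call with a single topic (the ZooKeeper address,
group id and topic name here are hypothetical, and the property names follow
the 0.7-era consumer):

  import java.util.Properties
  import kafka.consumer.{Consumer, ConsumerConfig}

  val props = new Properties()
  props.put("zk.connect", "zk1:2181")      // hypothetical ZooKeeper address
  props.put("groupid", "event-processors") // hypothetical group id
  val connector = Consumer.create(new ConsumerConfig(props))

  // Exactly one entry in topicCountMap: one topic, one stream.
  val streams = connector.createMessageStreams(Map("events" -> 1))
  val it = streams("events").head.iterator()
  while (it.hasNext) {
    val msg = it.next() // element type varies across 0.7/0.8 client versions
    // process msg here
  }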
Thanks,
Jun
On Wed, Jul 10, 2013 at 8:30 AM, Nihit Purwar wrote:
The weird part is this. If the consumers are consuming, the following
fetcher thread shouldn't be blocked on enqueuing the data. Could you turn
on TRACE level logging in kafka.server.KafkaRequestHandlers and see whether
any fetch requests are issued to the broker when the consumer threads get stuck?
Hi Jun,
Thanks for helping out so far.
As per your explanation, we are doing exactly what you mentioned in your
workaround below.
> A workaround is to use different consumer connectors, each consuming a
> single topic.
Here is the problem...
We have a topic which gets a lot of events (around ...)
Ok. One of the issues is that when you have a consumer that consumes
multiple topics, if one of the consumer threads is slow in consuming
messages from one topic, it can block the consumption of other consumer
threads. This is because we use a shared fetcher to fetch all topics. There
is an in-memory queue per consumer stream; if one queue fills up, the shared
fetcher blocks on enqueuing into it, which stalls the other streams.
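A sketch of the workaround Jun refers to, one connector per topic (topic and
group names hypothetical); since each connector owns its own fetcher, a slow
topic can no longer stall the others:

  import java.util.Properties
  import kafka.consumer.{Consumer, ConsumerConfig}

  val topics = List("fast-topic", "slow-topic") // hypothetical topic names

  val connectors = topics.map { topic =>
    val props = new Properties()
    props.put("zk.connect", "zk1:2181")
    props.put("groupid", "worker-group") // same group on every instance
    val connector = Consumer.create(new ConsumerConfig(props))
    val streams = connector.createMessageStreams(Map(topic -> 1))
    // hand streams(topic) off to its own processing thread(s) here
    connector
  }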
Hi Jun,
Please see my comments inline again :)
On 10-Jul-2013, at 9:13 AM, Jun Rao wrote:
> This indicates our in-memory queue is empty. So the consumer thread is
> blocked.
What should we do about this?
As I mentioned in the previous mail, events are there to be consumed.
Killing one consumer ...
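One way to surface this state instead of hanging silently, assuming the
high-level consumer's consumer.timeout.ms setting (the "stream" value below is
hypothetical, obtained from createMessageStreams):

  import kafka.consumer.ConsumerTimeoutException

  // alongside zk.connect / groupid in the consumer's Properties:
  //   props.put("consumer.timeout.ms", "30000")
  val it = stream.iterator()
  try {
    while (it.hasNext) {
      val msg = it.next()
      // process msg here
    }
  } catch {
    case _: ConsumerTimeoutException =>
      // nothing arrived for 30s: the in-memory queue really is empty
  }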
This indicates our in-memory queue is empty. So the consumer thread is
blocked. What about the Kafka fetcher threads? Are they blocked on anything?
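A quick way to check is a thread dump with standard JDK tooling (thread names
vary by Kafka version, so the grep pattern below is only a guess):

  jstack <consumer-pid> | grep -i -A 10 fetch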
Thanks,
Jun
On Tue, Jul 9, 2013 at 8:37 AM, Nihit Purwar wrote:
> Hello Jun,
>
> Please see my comments inline.
>
> On 09-Jul-2013, at 8:32 PM, Jun Rao wrote:
Hello Jun,
Please see my comments inline.
On 09-Jul-2013, at 8:32 PM, Jun Rao wrote:
> I assume that each consumer instance consumes all 15 topics.
No, we kept a dedicated consumer listening to the topic in question.
We did this because this queue processes huge amounts of data.
> Are all your consumer threads alive?
I assume that each consumer instance consumes all 15 topics. Are all your
consumer threads alive? If one of your threads dies, it will eventually
block the consumption in other threads.
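A defensive sketch for that failure mode ("stream" and handle() are
hypothetical): catch everything inside the loop so an exception kills the
message, not the thread.

  new Thread(new Runnable {
    def run() {
      val it = stream.iterator() // a KafkaStream from createMessageStreams
      while (true) {
        try {
          handle(it.next()) // handle() is a hypothetical callback
        } catch {
          case e: Throwable => e.printStackTrace() // log it; keep the thread alive
        }
      }
    }
  }).start()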
Thanks,
Jun
On Tue, Jul 9, 2013 at 4:18 AM, Nihit Purwar wrote:
> Hi,
>
> We are using kafka-0.7.2 with zookeeper (3.4.5)
Hi,
We are using kafka-0.7.2 with zookeeper (3.4.5)
Our cluster configuration:
3 brokers on 3 different machines. Each broker machine has a zookeeper instance
running as well.
We have 15 topics defined. We are trying to use them as queues (JMS-like) by
defining the same group across different kafka consumers.
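A sketch of that setup (property names follow the 0.7-era consumer; host and
group names are hypothetical): giving every consumer process the same group id
is what produces the JMS-queue behaviour, with each message delivered to
exactly one member of the group.

  import java.util.Properties
  import kafka.consumer.{Consumer, ConsumerConfig}

  val props = new Properties()
  props.put("zk.connect", "host1:2181,host2:2181,host3:2181") // the 3 ZK nodes
  props.put("groupid", "shared-worker-group") // identical on every consumer
  val connector = Consumer.create(new ConsumerConfig(props))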