Hi there,
Wondering if anyone has run into a unique situation where zookeeper seems
to have the topic metadata, but the broker doesn't have the corresponding
log file.
Below is what we noticed in zookeeper:
--
kafka@kafka-3:~$ /opt/kafka/kaf
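(If it helps anyone comparing the two views, here is a rough sketch; the topic name 'my-topic', the /opt/kafka install path, localhost:2181, and the /var/kafka-logs log dir are only placeholders for your own values, and kafka-topics.sh needs 0.8.1 or later:)
# topic metadata as registered in zookeeper
echo "get /brokers/topics/my-topic" | /opt/kafka/bin/zookeeper-shell.sh localhost:2181
# partition assignment / leader / isr as reported by the admin tool
/opt/kafka/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-topic
# log segments the broker actually has on disk (check log.dirs in server.properties)
ls -l /var/kafka-logs/my-topic-*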
System configuration:
OS: Ubuntu 14.04 LTS
java version "1.7.0_80", Java(TM) SE Runtime Environment (build
1.7.0_80-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11,
mixed mode)
kafka version: kafka_2.10-0.8.1.1
No. of brokers: 4
All,
We're experiencing a weird TCP connection leakage situation
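A quick way to watch the broker-side connection count over time is something like the following (sketch only; 9092 is the assumed listener port):
# count established TCP connections involving the broker port, refreshed every 5s
watch -n 5 "netstat -tan | grep ':9092 ' | grep -c ESTABLISHED"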
Hi Stefan,
Have you looked at the following output for message distribution
across the topic-partitions, and which topic-partition is consumed by
which consumer thread?
kafka-server/bin> ./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zkconnect localhost:2181 --group
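(For reference, a complete invocation would look something like the line below; the group and topic names are placeholders, and --topic can be omitted to show every topic the group consumes:)
./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zkconnect localhost:2181 --group my-consumer-group --topic my-topic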
Jagbir
On Wed, Jul
System: jdk-7u76 (Oracle) on Ubuntu 14.04.1 LTS (Trusty Tahr), Kernel:
3.13.0-44-generic
Kafka Version: 0.8.1.2, Broker Cfg: three brokers, Zookeeper Cfg: three
nodes, Replication Factor: 2
Client Machine Configuration:
- Memory: 4G
- CPU: 2 (Xeon @ 2.50GHz)
- Java args: -server -Xmx2G -Xms2G
message rate - if you look at those over a period of
> time you can figure out which of those are likely to be defunct and
> then delete those topics.
>
> On Thu, Feb 05, 2015 at 02:38:27PM -0800, Jagbir Hooda wrote:
>> First I would like to take this opportunity to thank this group f
First I would like to take this opportunity to thank this group for
releasing 0.8.2.0. It's a major milestone with a rich set of features.
Kudos to all the contributors! We are still running 0.8.1.2 and are
planning to upgrade to 0.8.2.0. While planning this upgrade we
discovered many topics that a
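As a follow-up to the suggestion above: once on 0.8.2.0, and with delete.topic.enable=true set on the brokers, a defunct topic can be removed with something like the following (the topic name is a placeholder; on 0.8.1.x topic deletion is not properly supported):
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic defunct-topic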
utdownableThread.run(ShutdownableThread.scala:51)
-8<
On Wed, Sep 24, 2014 at 9:39 PM, Jun Rao wrote:
> You can enable some trace/debug level logging to see if the thread is
> indeed hanging in BoundedByteBufferReceive.
>
> Thanks,
>
> Jun
>
> On We
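For what it's worth, a minimal sketch of what that extra logging could look like in the consumer application's log4j.properties (the logger names below are assumed from the 0.8.x package layout, and they inherit whatever appender the root logger already has):
# trace the fetch/receive path of the 0.8.x consumer
log4j.logger.kafka.consumer.SimpleConsumer=TRACE
log4j.logger.kafka.network.BoundedByteBufferReceive=TRACE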
> using SimpleConsumer directly? It seems it's started by the high level
> consumer through the FetchFetcher thread.
>
> Thanks,
>
> Jun
>
> On Mon, Sep 22, 2014 at 11:41 AM, Jagbir Hooda wrote:
>
>> Note: Re-posting the older message from another account due
Note: Re-posting the older message from another account due to
formatting issues.
Folks,
Recently in one of our SimpleConsumer-based client applications (0.8.1.1),
we spotted a very busy CPU with almost no traffic in/out from the client
and Kafka broker (1 broker + 1 zookeeper) (the stack trace is
I'm sorry about the formatting issues below :-( I need to stop using hotmail, as
hotmail is mangling the message formatting :-( I'll try re-posting from my
gmail address.
Jagbir
> From: jsho...@hotmail.com
> To: users@kafka.apache.org
> Subject: Busy CPU while negotiating contentBuffer size at
>
Folks,
Recently in one of our SimpleConsumer-based client applications (0.8.1.1), we
spotted a very busy CPU with almost no traffic in/out from the client and Kafka
broker (1 broker + 1 zookeeper) (the stack trace is attached at the end).
The busy thread was invoked in a while loop anchored at the read
Could you file a jira and put the link there?
>
> Thanks,
>
> Jun
>
>
> On Tue, Aug 12, 2014 at 11:14 PM, Jagbir Hooda wrote:
>
> > > Date: Tue, 12 Aug 2014 16:35:35 -0700
> > > Subject: Re: Blocking Recursive parsing from
> > kafka
> Date: Tue, 12 Aug 2014 16:35:35 -0700
> Subject: Re: Blocking Recursive parsing from
> kafka.consumer.TopicCount$.constructTopicCount
> From: wangg...@gmail.com
> To: users@kafka.apache.org
>
> Hi Jagbir,
>
> The thread dump you uploaded is not readable; could you re-parse it and
> upload it again?
Hi All,
We have a typical cluster of 3 kafka instances backed by 3 zookeeper instances
(kafka version 0.8.1.1, scala version 2.10.3, java version 1.7.0_65). On the
consumer end, when some of our consumers were getting recycled, we found a
troubling recursion which was taking a busy lock and blocking
I think a duplicate message is the right behavior for both patterns:
iter.next(); process(message); CRASH; consumer.commit();
iter.peek(); process(message); CRASH; iter.next(); CRASH; consumer.commit();
The only difference is fewer lines of code for the first pattern.
Jagbir
> Date: Mon, 23 Jun 2014 13:4
> Date: Thu, 16 Jan 2014 18:47:18 -0800
> Subject: Re: Question about missing broker data in zookeeper
> From: wangg...@gmail.com
> To: users@kafka.apache.org
>
> To get the broker registration data you need
>
> get /brokers/ids/1 (ls /brokers/ids/1 will only retr
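A non-interactive way to run those commands with the shell that ships with Kafka (sketch; broker id 1 and localhost:2181 are assumptions):
# list registered broker ids, then dump the registration data for broker 1
echo "ls /brokers/ids" | bin/zookeeper-shell.sh localhost:2181
echo "get /brokers/ids/1" | bin/zookeeper-shell.sh localhost:2181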
Hi,
I have a setup of three kafka servers (kafka_2.8.0-0.8.0) and three zookeeper
servers (zookeeper1, zookeeper2, zookeeper3).
Everything works OK, but when I did a consumer test using the nodejs package
node-kafka, it failed to retrieve any messages. When I looked more closely I
found something interesting
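One quick sanity check that bypasses node-kafka is to read the topic back with the console consumer that ships with the broker (sketch; the topic name is a placeholder and zookeeper1:2181 is assumed reachable):
bin/kafka-console-consumer.sh --zookeeper zookeeper1:2181 --topic my-topic --from-beginning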
Hi Arthur,
I'm running into a very similar issue even with the latest version
(kafka-python @ v0.8.1_1 used with kafka_2.8.0-0.8.0.tar.gz). I have created a
topic 'my-topic' with two partitions and replication factor 1 (across a set of
3 kafka brokers). I've published 100 messages to the topic (see
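When only part of the data comes back, it is worth confirming that both partitions have a live leader and a full isr; a sketch, assuming localhost:2181 for zookeeper:
# 0.8.0 distribution
bin/kafka-list-topic.sh --zookeeper localhost:2181 --topic my-topic
# 0.8.1+ equivalent
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-topic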