____
From: Hargett, Phil
Sent: Friday, August 02, 2013 1:36 PM
To: Jun Rao
Cc: users@kafka.apache.org
Subject: RE: Fatal issue (was RE: 0.8 throwing exception "Failed to find
leader" and high-level consumer fails to make progress)
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Wednesday, July 31, 2013 12:16 AM
To: Hargett, Phil
Cc: users@kafka.apache.org
Subject: Re: Fatal issue (was RE: 0.8 throwing exception "Failed to find
leader" and high-level consumer fails to make progress)
Hmm, that's a good theory. My understanding is that you have one thread…
____
On Jul 30, 2013, at 12:01 PM, "Jun Rao"
<jun...@gmail.com> wrote:
What's the revision of the 0.8 branch that you used? If that's older than the
beta1 release, I recommend that you upgrade.
Thanks,
Jun
On Tue, Jul 30, 2013 at 3:09 AM, Hargett, Phil
<phil.…
____
…ne? It may
not be related, but we did fix some consumer-side deadlock issues there.
Thanks,
Jun
On Mon, Jul 29, 2013 at 9:02 AM, Hargett, Phil
<phil.harg...@mirror-image.com> wrote:
____
I think we have 3 different classes in play here:
* kafka.consumer.ZookeeperConsumerConnector
____
…will see Zookeeper
session expirations in the consumer log (grep for Expired). Occasional
rebalances are fine. Too many rebalances can slow down consumption, and one
will need to tune the Java GC settings.
Thanks,
Jun
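In the 0.8 era, Jun's advice above about session expirations and rebalances usually translated into raising the consumer's Zookeeper timeouts and rebalance retries while tuning GC. A sketch of the relevant high-level consumer properties (the values shown are just the 0.8 defaults, listed only to name the knobs — raise them to ride out longer GC pauses):

```properties
# 0.8 high-level consumer knobs related to session expiry and rebalancing
# (values shown are the defaults, not recommendations)
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
rebalance.max.retries=4
rebalance.backoff.ms=2000
```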
On Sat, Jul 27, 2013 at 9:33 AM, Hargett, Phil
<phil.harg...@mir…
____
…25, 2013 at 10:07 AM, Hargett, Phil <
phil.harg...@mirror-image.com> wrote:
> Possibly.
>
> I see evidence that it's being stopped / started every 30 seconds in some
> cases (due to my code). It's entirely possible that I have a race, too, in
> that 2 separate pieces of…
____
Subject: Re: 0.8 throwing exception "Failed to find leader" and high-level
consumer fails to make progress
The exception is likely due to a race condition between the logic in the ZK
watcher and the closing of the ZK connection. It's harmless, except for the
weird exception.
Thanks,
Jun
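The race Jun describes — a watcher callback firing while the connection it belongs to is being torn down — is the classic shutdown/event race. A generic illustration (not the actual ZookeeperConsumerConnector internals; the class and field names are invented) of guarding the callback with a closed flag:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Generic illustration of guarding a watcher callback against a
// concurrently closing connection (not Kafka's actual code).
class WatchedConnection {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    volatile int eventsHandled = 0;

    // Called by the watcher thread when a ZK event arrives.
    void onEvent() {
        if (closed.get()) {
            return; // connection is (being) closed; drop the stale event
        }
        eventsHandled++;
    }

    void close() {
        closed.set(true); // watcher events arriving after this are ignored
    }
}
```

Without such a guard the callback can observe a half-closed connection and throw, which matches Jun's "harmless but weird exception" description.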
On Tue, Jun…
____
…2013, at 12:45 PM, "Jun Rao" wrote:
> This typically only happens when the consumerConnector is shut down. Are
> you restarting the consumerConnector often?
>
> Thanks,
>
> Jun
>
>
> On Tue, Jun 25, 2013 at 9:40 AM, Hargett, Phil <
> phil.harg...@mirr…
____
Seeing this exception a LOT (3-4 times per second, same log topic).
I'm using external code to feed data to about 50 different log topics over a
cluster of 3 Kafka 0.8 brokers. There are 3 ZooKeeper instances as well; all
of this is running on EC2. My application creates a high-level consumer…
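A high-level consumer in 0.8 is driven by a java.util.Properties config. A minimal sketch of the setup being described (the property names are the standard 0.8 consumer settings; the group id and host names are made up, and the actual connector creation, which needs the Kafka jar on the classpath, is left as a comment):

```java
import java.util.Properties;

// Minimal 0.8-style high-level consumer configuration (values illustrative).
class ConsumerSetup {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181"); // the 3 ZK instances
        props.put("group.id", "log-topic-reader");          // hypothetical group name
        props.put("auto.commit.enable", "false");           // commit offsets explicitly
        return props;
    }
    // With the Kafka 0.8 jar available, the connector would be created as:
    // ConsumerConnector cc = kafka.consumer.Consumer.createJavaConsumerConnector(
    //         new kafka.consumer.ConsumerConfig(consumerProps()));
}
```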
____
For one of our key Kafka-based applications, we ensure that all messages in the
stream have a common binary format, which includes (among other things) a
version identifier and a schema identifier. The version refers to the format
itself, and the schema refers to the "payload," which is the data…
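The post doesn't spell out the binary layout, so the one below is purely hypothetical — a 1-byte format version followed by a 2-byte schema id and then the payload — but it illustrates the version/schema split being described:

```java
import java.nio.ByteBuffer;

// Hypothetical envelope layout (the post does not give the real one):
// [1 byte format version][2 bytes schema id][payload bytes]
class Envelope {
    static byte[] encode(byte version, short schemaId, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(3 + payload.length);
        buf.put(version).putShort(schemaId).put(payload);
        return buf.array();
    }

    static byte versionOf(byte[] message) {
        return message[0];
    }

    static short schemaIdOf(byte[] message) {
        return ByteBuffer.wrap(message).getShort(1); // skip the version byte
    }
}
```

Keeping the version byte first means a reader can reject or route messages before attempting to parse a payload whose schema it doesn't know.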
____
This turns out to be a bug. It seems that we always call commitOffsets() in
ZookeeperConsumerConnector.closeFetchersForQueues() (which is called during
rebalance), whether auto commit is enabled or not. Could you file a jira?
Thanks,
Jun
On Fri, May 24, 2013 at 8:06 AM, Hargett, Phil <…
____
In one of our applications using Kafka, we are using the high-level consumer to
pull messages from our topic.
Because we pull messages from topics in discrete units (e.g., an hour's worth
of messages), we want to control explicitly when offsets are committed.
Even though "auto.commit.enable" is…
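The commit-per-discrete-unit policy described above can be sketched generically. In 0.8 the commit action would be consumerConnector.commitOffsets(); it is represented here by a plain Runnable so the sketch stays self-contained (UnitCommitter and its message-count notion of a "unit" are invented for illustration — an hour's worth of messages could equally be a time window):

```java
// Sketch: commit offsets only after a complete discrete unit of messages
// has been processed. The Runnable stands in for the Kafka 0.8 call
// consumerConnector.commitOffsets().
class UnitCommitter {
    private final int unitSize;     // messages per discrete unit (illustrative)
    private final Runnable commit;  // the real commit action
    private int pending = 0;
    int commits = 0;

    UnitCommitter(int unitSize, Runnable commit) {
        this.unitSize = unitSize;
        this.commit = commit;
    }

    void onMessage() {
        if (++pending >= unitSize) {
            commit.run();   // only now are the consumed offsets made durable
            commits++;
            pending = 0;
        }
    }
}
```

The bug quoted above defeats exactly this pattern: if a rebalance commits offsets mid-unit, a crash can no longer replay the partial unit.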
____
Re "So, how did you get the data from the local broker out without ZK"...
We didn't use Mirror Maker itself. We wrote a simple application, inspired by
Mirror Maker but written in Java, that understands our topology and uses
external information to locate source brokers from which to consume data.
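The "external information" mechanism isn't described, so this is only a guess at its simplest possible form: a parser for a hand-maintained broker list (the BrokerList class and the "host:port,host:port" format are assumptions, not the poster's actual design):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical parser for an externally supplied broker list of the form
// "host1:9092,host2:9092" (the post does not describe the real mechanism).
class BrokerList {
    static List<String[]> parse(String spec) {
        List<String[]> endpoints = new ArrayList<String[]>();
        for (String part : spec.split(",")) {
            endpoints.add(part.trim().split(":")); // [host, port]
        }
        return endpoints;
    }
}
```

Each endpoint could then be handed to a 0.8 SimpleConsumer, which connects to a named broker directly rather than discovering it through Zookeeper.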
____
While the replication features in 0.8 are very desirable for us, one aspect of
0.7 that was also appealing was that in specific scenarios a single broker
instance could run by itself without an accompanying Zookeeper.
This provided a lightweight "entry point" for log flows by running lots of…
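For reference, 0.7 exposed that standalone mode through a broker setting; as far as I recall the relevant server.properties entry was the following (removed in 0.8, where brokers always register in Zookeeper):

```properties
# Kafka 0.7 server.properties: run a single broker without Zookeeper
enable.zookeeper=false
brokerid=0
port=9092
```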