Hi
I am still seeing this issue during rebalancing: the consumer is committing
offsets and clearing the fetcher queues while it rebalances, and I don't want
this to happen. I am using Kafka 0.8.2. Please help me here.
Thanks
Gayathri
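The setting at issue here is the high-level consumer's autocommit switch. A
minimal sketch of the relevant 0.8.x configuration (group and ZooKeeper names
are illustrative):

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class AutoCommitOffSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181");  // illustrative ZK connect string
        props.put("group.id", "my-group");           // illustrative group id
        // With this off, offsets should only be written when the app calls
        // commitOffsets(); the complaint above is that the rebalance path
        // still commits them and clears the fetcher queues.
        props.put("auto.commit.enable", "false");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create streams, consume, and call connector.commitOffsets()
        // only after a record has been fully processed ...
        connector.shutdown();
    }
}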
Hey just reporting that the ZK disconnect tip on the FAQ was in fact right on
the money. After tweaking our GC settings and zk timeout settings, I'm no
longer seeing the flood of rebalances.
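For anyone hitting the same flood of rebalances, these are roughly the knobs
involved. The 0.7-era property names below are assumed and the values are
illustrative; tune them against your own worst-case GC pauses:

import java.util.Properties;

public class ZkTimeoutSketch {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zk.connect", "zk1:2181,zk2:2181,zk3:2181");  // illustrative quorum
        // The session timeout must comfortably exceed the longest GC pause;
        // if a pause outlives it, ZooKeeper expires the session and the
        // whole consumer group rebalances.
        props.put("zk.sessiontimeout.ms", "12000");
        props.put("zk.connectiontimeout.ms", "12000");
        props.put("zk.synctime.ms", "2000");
        return props;
    }
    // On the JVM side, a low-pause collector (e.g. -XX:+UseConcMarkSweepGC)
    // is a common way to keep pauses under the session timeout.
}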
--
Ian Friedman
On Tuesday, August 20, 2013 at 2:26 AM, Ian Friedman wrote:
> Sorry, ignore that f
Sorry, ignore that first exception; I believe that was caused by an actual
manual shutdown. The NoNode exception, though, has been popping up a lot. I'm
not sure if it's relevant, but it seems to show up a bunch whenever the
consumers decide it's time to rebalance continuously.
--
Ian Friedman
That's not it either. I just had all my consumers shut down on me with this:
INFO 21:51:13,948 () ZkUtils$ - conflict in
/consumers/flurry1/owners/dataLogPaths/1-183 data:
flurry1_hs1030-1376964634130-dcc9192a-0 stored data:
flurry1_hs1061-1376964609207-4b7f348b-0
INFO 21:51:13,948 () Zooke
Any failure/restart of a consumer or a broker can also trigger a rebalance.
Thanks,
Jun
On Mon, Aug 19, 2013 at 6:00 PM, Ian Friedman wrote:
> Jun, I read that FAQ entry you linked, but I am not seeing any Zookeeper
> connection loss in the logs. It's rebalancing multiple times per minute,
>
Jun, I read that FAQ entry you linked, but I am not seeing any Zookeeper
connection loss in the logs. It's rebalancing multiple times per minute,
though. Any idea what else could cause this? We're running Kafka 0.7.2 with
approximately 400 consumers against a topic with 400 partitions * 3 brokers.
--
Ok Jun, thanks very much. I'm working on building that now and will come back
with a patch once I have it running in our production environment.
--
Ian Friedman
On Thursday, August 15, 2013 at 10:53 AM, Jun Rao wrote:
> We are only patching blocker issues in 0.7. 0.8 beta1 has been released
Yes, during rebalances, messages could be re-delivered since the new owner
of a partition starts fetching from the last checkpointed offset in ZK.
For reasons on why rebalances happen a lot, see
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog%3F
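The practical consequence is that processing has to tolerate duplicates across
a rebalance. One common approach, sketched below, is to deduplicate on
(partition, offset) before doing any non-idempotent work. The class is
hypothetical, not part of Kafka, and in a real deployment the seen-offset state
would have to live somewhere the new partition owner can read it; the in-memory
map here only shows the shape:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OffsetDeduplicator {
    // Highest offset already processed, per partition.
    private final Map<Integer, Long> highestProcessed = new ConcurrentHashMap<>();

    /** Returns true if this (partition, offset) has not been processed yet. */
    public boolean shouldProcess(int partition, long offset) {
        Long seen = highestProcessed.get(partition);
        return seen == null || offset > seen;
    }

    /** Record that processing of (partition, offset) finished successfully. */
    public void markProcessed(int partition, long offset) {
        highestProcessed.merge(partition, offset, Math::max);
    }
}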
It's a simple enough patch, but wouldn't this mean that messages still in
process when a rebalance happens could get delivered to another consumer if we
end up losing the partition? Rebalances seem to happen very frequently with a
lot of consumers for some reason… And it doesn't seem like a cons
We are only patching blocker issues in 0.7; 0.8 beta1 has been released and
most dev effort will be on 0.8 and beyond. That said, this particular case
is easy to fix. If you can port the patch in
https://issues.apache.org/jira/browse/KAFKA-919 to the 0.7 branch, we can
commit it to the 0.7 branch.
Ugh.
Is there any way to make this work in 0.7, or is transitioning to 0.8 the only
way? My operations engineers spent a lot of effort in configuring and hardening
our 0.7 production install, and 0.8 isn't released yet. Not to mention having
to integrate the new client-side code.
Either way,
Yes, this is an issue and has been fixed in 0.8.
Thanks,
Jun
On Wed, Aug 14, 2013 at 5:21 PM, Ian Friedman wrote:
> Hey guys,
>
> I designed my consumer app (running on 0.7) to run with autocommit off and
> commit manually once it was done processing a record. The intent was so
> that if a co
Hi Ian,
closeFetchersForQueues in ZookeeperConsumerConnector.scala is only called
during a rebalance, so if your consumer died in the middle of processing a
message, that function would not be called on it and the dead consumer would
not participate in the rebalance.
Guozhang
On Wed, Aug 14, 2013 at 5:21 PM, Ia
Hey guys,
I designed my consumer app (running on 0.7) to run with autocommit off and
commit manually once it was done processing a record. The intent was so that if
a consumer died while processing a message, the offset would not be committed,
and another box would pick up the partition and re
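In rough outline, that design corresponds to something like the following;
the 0.7-era property spellings (groupid, autocommit.enable) are assumed, and
the names and values are illustrative:

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class ManualCommit07Sketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zk.connect", "zk1:2181");        // illustrative
        props.put("groupid", "my-group");           // illustrative
        props.put("autocommit.enable", "false");    // app controls when offsets are committed

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // ... consume a record from the stream and finish processing it, then:
        connector.commitOffsets();
        // If the process dies before commitOffsets(), the last checkpointed
        // offset in ZK is unchanged, so the next owner of the partition
        // re-reads and re-processes the record.
        connector.shutdown();
    }
}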