"Rebalancing attempt failed" indicates the rebalancing failed. I added some
notes in the last item in
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped,why
?
Thanks,
Jun
On Fri, Apr 11, 2014 at 11:23 PM, Arjun wrote:
> Even after changing the fetch wait m
>
> A follow-on question: what is the fairness policy when a single stream
> serves multiple topic-partitions? At the chunk level? The reason I ask is
> that I'm trying to manage processing latency across partitions.
The consumer will issue multi-fetch requests across all the brokers
that it cons
Many thanks Joel.
On Mon, Apr 14, 2014 at 11:27 AM, Joel Koshy wrote:
> >
> > A follow-on question: what is the fairness policy when a single stream
> > serves multiple topic-partitions? At the chunk level? The reason I ask is
> > that I'm trying to manage processing latency across partitions.
I've got some consumers under decent GC pressure and, as a result, they are
having ZK sessions expire and the consumers never recover. I see a number
of rebalance failures in the log after the ZK session expiration, followed
by silence (and unconsumed partitions).
My hypothesis is that, since the GC
Correct - heavy client GC leads to numerous problems. There are
two things you can do:
1) Tune the client JVM better to get GC to a more reasonable level
2) Increase the zookeeper session timeout value (this is generally a
work-around for #1, but it can buy you time to dig into it)
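As a concrete illustration of #2, the session timeout is an ordinary consumer property (a sketch; the values below are illustrative, assuming the 0.8.x high-level consumer config names):

```
# consumer.properties (illustrative values)
zookeeper.session.timeout.ms=30000     # default is 6000; a longer timeout rides out GC pauses
zookeeper.connection.timeout.ms=30000  # default is 6000
```

Note this only buys headroom; a long enough GC pause will still expire the session.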
--
Dave D
We had high GC in one of the brokers when the other one was down.
Please find the zookeeper timeout logs below. We are going to do more testing,
as we think too much logging on the Kafka side is causing the high GC (the
Kafka logs were in DEBUG mode).
2014-04-11 15:46:02 WARN server.ZooKeeperServer - Connection req
Thanks David. One hypothesis we have is that using different
rebalance.backoff.ms settings for the different ConsumerConnectors on the
same JVM will keep them from synchronizing their rebalance attempts enough
so that one can finish.
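For what it's worth, the staggering idea would amount to giving each ConsumerConnector on the JVM a slightly different backoff (a sketch, assuming the 0.8.x consumer configs rebalance.backoff.ms and rebalance.max.retries; the exact values are made up):

```
# consumer A
rebalance.backoff.ms=2000    # default is zookeeper.sync.time.ms (2000)
rebalance.max.retries=4      # default is 4

# consumer B, same JVM: a different backoff to de-synchronize rebalance attempts
rebalance.backoff.ms=3100
rebalance.max.retries=4
```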
On Mon, Apr 14, 2014 at 12:58 PM, David DeMaagd wrote:
> Corre
Deliberate variation of the retry/backoff parameters on a per-client basis
is probably an even more complicated work-around than bumping up the session
timeout. I've never tried it because it doesn't really address the probable
root cause (GC causing client stalls, zookeeper server dropping con
Hello,
I am excited to be trying out Kafka. It sounds like everything I ever wanted in
a messaging system, but didn't have.
I am interested in using the High-Level Consumer without losing (consuming)
messages that were read from the broker, but not processed by user code
(exception thrown, dat
Hello All,
After performing an upgrade of our Kafka 0.8.0 to Kafka 0.8.1, we are receiving
a failure in the preferred replica election process. I am not sure if this is a
known issue or not. This is a two node Kafka cluster in our QA environment
(replica 2) with a total of 2100+ partitions over
Hi Paul,
This will work if you care about not losing any unprocessed messages, but
not if you care about messages being processed twice.
Say you have a failure after process(...) and before it.next(); on recovery
the same message will be processed again.
I agree that the cross-thread commit() in the 0.8
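The ordering Guozhang describes can be simulated with plain JDK code (no Kafka classes; the message names, offsets, and crash point are all made up for illustration) to show why a failure between process(...) and the offset advance yields at-least-once delivery:

```java
import java.util.*;

public class AtLeastOnceDemo {
    public static void main(String[] args) {
        List<String> partition = Arrays.asList("m0", "m1", "m2");
        long committedOffset = 0;               // what a restart resumes from
        List<String> processed = new ArrayList<>();

        // First run: crash after processing m1 but before its offset is committed.
        for (long off = committedOffset; off < partition.size(); off++) {
            processed.add(partition.get((int) off));  // process(...)
            if (off == 1) break;                      // simulated crash before commit
            committedOffset = off + 1;                // commit only after processing succeeds
        }

        // Recovery: resume from the last committed offset -> m1 is delivered again.
        for (long off = committedOffset; off < partition.size(); off++) {
            processed.add(partition.get((int) off));
            committedOffset = off + 1;
        }

        System.out.println(processed);  // m1 appears twice: at-least-once
    }
}
```

Committing before processing would give the opposite trade-off: no duplicates, but a crash mid-process loses that message (at-most-once).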
Did you make sure there were no under replicated partitions before issuing
the preferred leader election?
Thanks,
Jun
On Mon, Apr 14, 2014 at 4:34 PM, Bello, Bob wrote:
> Hello All,
>
> After performing an upgrade of our Kafka 0.8.0 to Kafka 0.8.1, we are
> receiving a failure in the preferre
In our case it's straight Java.
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Fri, Apr 11, 2014 at 8:58 PM, Michael Campbell <
michael.campb...@gmail.com> wrote:
> Are most of you using straight Java or Scala APIs to tal
Hi.
Currently we are using kafka_2.8.0-0.8.0-beta1 and the high-level
consumer group to consume messages. The topic has been created with 3 replicas
and 100 partitions so that a maximum of 100 threads can consume messages
simultaneously, but I am seeing that most threads are in the waiting state
and the lag is getti
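For reference, the thread-per-stream setup described above is driven by the topic-to-stream-count map handed to the high-level consumer (a plain-JDK sketch; the topic name is hypothetical, and the real connector call is shown only in a comment):

```java
import java.util.*;

public class TopicCountSketch {
    public static void main(String[] args) {
        // With 100 partitions, requesting 100 streams lets up to 100 threads
        // consume in parallel; any stream beyond the partition count is never
        // assigned a partition, so its thread would wait forever.
        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put("my-topic", 100);  // hypothetical topic name

        // With a real 0.8.x connector, this map would be passed to
        // ConsumerConnector.createMessageStreams(topicCountMap).
        System.out.println(topicCountMap.get("my-topic"));
    }
}
```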
Hi,
Can you please check whether this is the situation described in
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped,why?
Arjun Kota
On Tuesday 15 April 2014 11:49 AM, ankit tyagi wrote:
Hi.
currently we are using kafka_2.8.0-0.8.0-beta1 and the high-level
consumer