One alternative method is to check the ZooKeeper consumer registration
path; if the node is gone, then try to restart the consumer after the
session timeout.
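A minimal sketch of that check with the ZooKeeper Java client (the ZK address,
group and consumer id below are placeholders, not values from this thread):

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ConsumerNodeCheck {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 6000, new Watcher() {
            public void process(WatchedEvent event) { /* no-op */ }
        });
        // The high-level consumer registers an ephemeral node under
        // /consumers/<group>/ids/<consumerId>; path below is illustrative.
        String path = "/consumers/myGroup/ids/myGroup_myHost-1388416992091-ac0d82d7";
        Stat stat = zk.exists(path, false);
        if (stat == null) {
            // Ephemeral node is gone, so the old registration has expired
            // and the consumer can be restarted.
            System.out.println("registration gone - safe to restart consumer");
        } else {
            System.out.println("old registration still present - wait and retry");
        }
        zk.close();
    }
}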
Guozhang
On Mon, Dec 30, 2013 at 7:56 PM, Hanish Bansal <hanish.bansal.agar...@gmail.com> wrote:
By default zookeeper.session.timeout.ms is 6000, and as I looked into the
details, this value is negotiable. We tried to set this value to less than
4000 to expire the session earlier, but it was negotiated by ZooKeeper and
set to 4000 ms.
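As an illustration of that negotiation (sketch only; assumes a local ZooKeeper
running with the default tickTime of 2000 ms, so the server clamps sessions
into the 4000-40000 ms range), the ZooKeeper Java client reports the
negotiated value via getSessionTimeout():

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class NegotiatedTimeout {
    public static void main(String[] args) throws Exception {
        final CountDownLatch connected = new CountDownLatch(1);
        // Ask for a 3000 ms session; the server clamps it into
        // [2 * tickTime, 20 * tickTime], i.e. 4000-40000 ms by default.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, new Watcher() {
            public void process(WatchedEvent event) {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });
        connected.await();
        // After the handshake this returns the negotiated value (4000 ms here).
        System.out.println("negotiated timeout: " + zk.getSessionTimeout() + " ms");
        zk.close();
    }
}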
We have a backend script running which checks every second whether the
co
That's actually easier said than done. The request logs on TRACE are
flooded with entries; I get about 65 MB of log files per day. That's on an
idle set of brokers. There are no produce requests and only one connected
consumer that's just sitting there. Is that normal?
Thanks.
Yes, it applies to the consumer too.
On Mon, Dec 30, 2013 at 11:46 AM, Yu, Libo wrote:
> Hi Jun,
>
> zookeeper.session.timeout.ms is used in a broker's configuration and
> manages brokers' registration with zk.
> Does it apply to consumer as well? Thanks.
>
> Regards,
>
> Libo
>
>
Hi Jun,
zookeeper.session.timeout.ms is used in a broker's configuration and manages
brokers' registration with zk.
Does it apply to consumer as well? Thanks.
Regards,
Libo
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Monday, December 30, 2013 11:13 AM
To: users@
If the consumer is not shut down properly, it will take
zookeeper.session.timeout.ms before the consumer is deregistered from ZK.
If you restart the consumer before that, rebalances may fail.
Make sure that you call connector.shutdown() when you shut down the consumer.
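For example (a rough sketch, not from this thread; the ZK address and group id
are placeholders), registering the shutdown in a JVM shutdown hook covers a
kill/SIGTERM as well:

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class CleanShutdown {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");   // placeholder
        props.put("group.id", "myGroup");                    // placeholder
        props.put("zookeeper.session.timeout.ms", "6000");

        final ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Deregisters the consumer from ZK on exit, so its ephemeral node is
        // removed immediately instead of lingering for the session timeout.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                connector.shutdown();
            }
        });

        // ... createMessageStreams(...) and consume as usual ...
    }
}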
Thanks,
Jun
On Mon, Dec 3
If you have 1000 partitions and 500 consumers, each consumer should be
consuming 2 partitions. You can verify this using ConsumerOffsetChecker.
Which version of Kafka are you using? If it's 0.8, you may want to take a
look at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whydataisnoteve
Yes, take a look at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Idon'twantmyconsumer'soffsetstobecommittedautomatically.CanImanuallymanagemyconsumer'soffsets
?
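A rough sketch of what that FAQ entry describes (the topic name, group id and
commit interval below are made up): disable auto-commit and call
commitOffsets() yourself:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ManualCommit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");  // placeholder
        props.put("group.id", "myGroup");                   // placeholder
        props.put("auto.commit.enable", "false");           // turn off auto-commit

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("myTopic", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("myTopic").get(0).iterator();

        int processed = 0;
        while (it.hasNext()) {
            it.next(); // process the message here
            if (++processed % 100 == 0) {
                // Commits the current offsets of all partitions owned by this
                // connector; the 0.8 high-level consumer has no per-partition commit.
                connector.commitOffsets();
            }
        }
    }
}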
/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
ht
Hi All,
Can we manage partition-offset commits separately, instead of a
consumerConnector.commitOffsets() call (which commits the offsets of all the
related broker partitions simultaneously)?
Hi All,
I am getting a consumer rebalance failed exception if I restart my consumer
within 1-3 seconds.
Exception trace is:
Caused by: kafka.common.ConsumerRebalanceFailedException:
indexConsumerGroup1_IMPETUS-I0027C-1388416992091-ac0d82d7 can't rebalance
after 4 retries
at
kafka.consumer.Zook
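For reference, a sketch of the consumer settings usually tuned for this
failure (values are illustrative; rebalance.max.retries defaults to 4, which
matches the trace above, and rebalance.backoff.ms falls back to
zookeeper.sync.time.ms when unset and may not exist in the earliest 0.8
builds): make the total retry window longer than zookeeper.session.timeout.ms
so the retries outlast the stale registration.

import java.util.Properties;
import kafka.consumer.ConsumerConfig;

public class RebalanceTuning {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");   // placeholder
        props.put("group.id", "indexConsumerGroup1");        // group from the trace
        props.put("zookeeper.session.timeout.ms", "6000");
        // Retry window = rebalance.backoff.ms * rebalance.max.retries
        //              = 2000 ms * 10 = 20000 ms > 6000 ms session timeout.
        props.put("rebalance.max.retries", "10");
        props.put("rebalance.backoff.ms", "2000");
        ConsumerConfig config = new ConsumerConfig(props);
        // ... pass config to Consumer.createJavaConsumerConnector(...) ...
    }
}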