I was having the same issue with Kafka 2.11-1.0.0. You need to ensure that
the replication factor of the internal topic __consumer_offsets is
greater than 1: the group coordinator for a consumer group is the broker
that leads that group's partition of __consumer_offsets, so with a
replication factor of 1 there is no replica that can take over when that
broker goes down.
After setting it to 3 in my case, the coordinator failed over to another
broker and the consumer group rebalanced automatically.

Reference:
https://stackoverflow.com/questions/46817599/kafka-group-coordinator-fail-recovery-on-0-11-0-1

On 7 April 2017 at 05:46, Daniel Hinojosa <dh.evolutionn...@gmail.com>
wrote:

> Hey all,
>
> Question. I have three brokers.  I also have 3 consumers, each on its own
> thread, consuming the 3 partitions of the same topic "scaled-cities".  Here are
> the configs when I run:
>
> kafka-topics.sh --describe --topic 'scaled-cities' --zookeeper zoo2:2181
>
> Topic:scaled-cities PartitionCount:3 ReplicationFactor:3 Configs:
>
> Topic: scaled-cities Partition: 0 Leader: 1 Replicas: 0,1,2 Isr: 1,2,0
>
> Topic: scaled-cities Partition: 1 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0
>
> Topic: scaled-cities Partition: 2 Leader: 2 Replicas: 2,0,1 Isr: 1,2,0
>
> My consumer loop uses all three brokers for failover:
>
> props.put("bootstrap.servers", "kaf0:9092, kaf1:9092, kaf2:9092");
>
> But when I stop the broker that happens to be the group coordinator, in
> this case kaf0, the consumers stop and no longer consume messages, and
> the following is displayed:
>
> [pool-1-thread-1] INFO
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator -
> Marking the coordinator kaf0:9092 (id: 2147483647 rack: null) dead for
> group consumerGroupAlpha
>
> Does anyone know why the group coordinator does not move to another
> broker to help out?  These consumers will not process any more messages
> for this group until I bring that broker back up, and it doesn't feel
> like this should be by design.
>
> Thanks and Appreciate the Help
>
