Hi,

Just a long shot, and I might be wrong. You have
offsets.topic.replication.factor=1 in your config, so when one broker is down,
some partitions of the __consumer_offsets topic go down with it. Then
kafka-consumer-groups can't fetch offsets from those partitions. Maybe the
error message is just a little misleading.
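If that's the case, you can check it by describing the internal topic (just a sketch, using the paths and addresses from your thread):

    /root/kafka_2.12-2.8.2/bin/kafka-topics.sh \
      --bootstrap-server 192.168.20.224:9092 \
      --describe --topic __consumer_offsets

Partitions whose only replica lives on the stopped broker will show no leader. For a 3-broker cluster, the usual recommendation in server.properties would be something like:

    offsets.topic.replication.factor=3
    transaction.state.log.replication.factor=3
    transaction.state.log.min.isr=2

Note that these settings only take effect when the internal topics are first created; for an existing __consumer_offsets topic you'd have to raise the replication factor yourself with kafka-reassign-partitions.sh.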



On Sat, Jan 20, 2024 at 11:38 AM Yavuz Sert <yavuz.s...@netsia.com> wrote:

> Hi, sorry for the confusion, here are the details:
>
> I have 3 broker nodes: 192.168.20.223 / 224 / 225
>
> When all kafka services are UP:
>
> [image: image.png]
> I stopped the kafka service on *node 225*:
>
> [image: image.png]
> Then I tried the command on node223 with --bootstrap-server
> 192.168.20.223:9092,192.168.20.224:9092,192.168.20.225:9092:
>
> [image: image.png]
>
>
>
> *Caused by: org.apache.kafka.common.errors.TimeoutException:
> Call(callName=findCoordinator, deadlineMs=1705743236910, tries=47,
> nextAllowedTryMs=1705743237011) timed out at 1705743236911 after 47
> attempt(s)*
> *Caused by: org.apache.kafka.common.errors.TimeoutException: Timed
> out waiting for a node assignment. Call: findCoordinator*
>
> Even after minutes, I got the same error.
>
> That's my problem.
>
> br,
>
> yavuz
>
>
> On Sat, Jan 20, 2024 at 4:11 AM Haruki Okada <ocadar...@gmail.com> wrote:
>
>> Hi.
>>
>> Which server did you shutdown in testing?
>> If it was 192.168.20.223, that is natural kafka-consumer-groups script
>> fails because you passed only 192.168.20.223 to the bootstrap-server arg.
>>
>> In an HA setup, you have to pass multiple brokers (as a comma-separated
>> string) to bootstrap-server so that the client can fetch initial metadata
>> from the other servers even when one fails.
>>
>> 2024年1月20日(土) 0:30 Yavuz Sert <yavuz.s...@netsia.com>:
>>
>> > Hi all,
>> >
>> > I'm doing some high-availability tests on Kafka v2.8.2.
>> > I have 3 kafka brokers and 3 zookeeper instances.
>> > When I shut down the kafka service on just one of the servers, I get this
>> > error:
>> >
>> > [root@node-223 ~]# /root/kafka_2.12-2.8.2/bin/kafka-consumer-groups.sh
>> > --bootstrap-server 192.168.20.223:9092 --group app2 --describe
>> >
>> > Error: Executing consumer group command failed due to
>> > org.apache.kafka.common.errors.TimeoutException:
>> > Call(callName=findCoordinator, deadlineMs=1705677946526, tries=47,
>> > nextAllowedTryMs=1705677946627) timed out at 1705677946527 after 47
>> > attempt(s)
>> > java.util.concurrent.ExecutionException:
>> > org.apache.kafka.common.errors.TimeoutException:
>> > Call(callName=findCoordinator, deadlineMs=1705677946526, tries=47,
>> > nextAllowedTryMs=1705677946627) timed out at 1705677946527 after 47
>> > attempt(s)
>> >         at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>> >         at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>> >         at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>> >         at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>> >         at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.$anonfun$describeConsumerGroups$1(ConsumerGroupCommand.scala:550)
>> >         at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
>> >         at scala.collection.Iterator.foreach(Iterator.scala:943)
>> >         at scala.collection.Iterator.foreach$(Iterator.scala:943)
>> >         at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
>> >         at scala.collection.IterableLike.foreach(IterableLike.scala:74)
>> >         at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
>> >         at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
>> >         at scala.collection.TraversableLike.map(TraversableLike.scala:286)
>> >         at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
>> >         at scala.collection.AbstractTraversable.map(Traversable.scala:108)
>> >         at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.describeConsumerGroups(ConsumerGroupCommand.scala:549)
>> >         at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.collectGroupsOffsets(ConsumerGroupCommand.scala:565)
>> >         at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.describeGroups(ConsumerGroupCommand.scala:368)
>> >         at kafka.admin.ConsumerGroupCommand$.run(ConsumerGroupCommand.scala:73)
>> >         at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:60)
>> >         at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
>> > Caused by: org.apache.kafka.common.errors.TimeoutException:
>> > Call(callName=findCoordinator, deadlineMs=1705677946526, tries=47,
>> > nextAllowedTryMs=1705677946627) timed out at 1705677946527 after 47
>> > attempt(s)
>> > *Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out
>> > waiting for a node assignment. Call: findCoordinator*
>> >
>> > kafka conf (for 1 server)
>> > broker.id=0
>> > listeners=PLAINTEXT://0.0.0.0:9092
>> > advertised.listeners=PLAINTEXT://192.168.20.223:9092
>> > num.network.threads=3
>> > num.io.threads=8
>> > socket.send.buffer.bytes=102400
>> > socket.receive.buffer.bytes=102400
>> > socket.request.max.bytes=104857600
>> > log.dirs=/root/kafkadir
>> > num.partitions=1
>> > num.recovery.threads.per.data.dir=1
>> > offsets.topic.replication.factor=1
>> > transaction.state.log.replication.factor=1
>> > transaction.state.log.min.isr=1
>> > log.retention.hours=1
>> > log.segment.bytes=104857600
>> > log.retention.check.interval.ms=300000
>> > delete.topic.enable=true
>> > zookeeper.connection.timeout.ms=18000
>> > zookeeper.connect=192.168.20.223:2181,192.168.20.224:2181,192.168.20.225:2181
>> > group.initial.rebalance.delay.ms=0
>> > max.request.size=104857600
>> > message.max.bytes=104857600
>> >
>> > How can I fix or troubleshoot this error?
>> >
>> > Thanks
>> >
>> > Yavuz
>> >
>>
>>
>> --
>> ========================
>> Okada Haruki
>> ocadar...@gmail.com
>> ========================
>>
>
