Kafka 3.6.0.
I have a KRaft cluster with three quorum servers. A power failure killed
all the controllers at the same time. After rebooting, the controllers
cannot connect to each other, so the cluster is down.
Log:
"""
[...]
[2023-12-01 20:29:24,931] INFO [MetadataLoader id=1000] initial
On 1/12/23 20:42, Jesus Cea wrote:
I use SASL_SSL. The controller credentials are hard-wired in the
configuration, so no "metadata recovery watermark" knowledge should be
necessary:
"""
listener.name.controller.sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256
listener.name.controller.plain.sasl.jaas.c
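For reference, a complete PLAIN JAAS entry for a controller listener typically takes the following shape. The login module class is Kafka's standard `PlainLoginModule`; the user names and passwords here are placeholders, not the values from my cluster:

```properties
listener.name.controller.plain.sasl.jaas.config=\
  org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="controller-user" \
  password="controller-secret" \
  user_controller-user="controller-secret";
```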
Hi.
I'm not sure whether KafkaManager has such a bug, but you should first
check whether there actually are any under-replicated partitions, using
the `kafka-topics.sh` command with the `--under-replicated-partitions`
option.
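That check could look like this (the bootstrap-server address is a placeholder for one of your brokers):

```
# List only partitions whose ISR is smaller than the full replica set.
bin/kafka-topics.sh --describe \
  --bootstrap-server localhost:9092 \
  --under-replicated-partitions
```

No output means no under-replicated partitions, whatever KafkaManager is displaying.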
On Thu, Nov 30, 2023 at 23:41 Lud Antonie wrote:
> Hello,
>
> After upgrading from 2.7.2 to 3.5.1 some topi
Hi.
`max.poll.records` has nothing to do with fetch requests (refs:
https://kafka.apache.org/35/documentation.html#consumerconfigs_max.poll.records
)
How many records are returned by a single fetch request then depends on
the partition-leader assignment. (note: we assume follower fetching is not
used
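To illustrate the distinction, here is a minimal sketch (my own model, not Kafka's actual code): the consumer fetches records from brokers in bulk and buffers them client-side, and `poll()` then hands the application at most `max.poll.records` of those already-buffered records, so the setting shapes poll batches, not fetch requests:

```python
from collections import deque

MAX_POLL_RECORDS = 500  # consumer config max.poll.records (default is 500)

def poll(buffer: deque, max_poll_records: int = MAX_POLL_RECORDS) -> list:
    """Return up to max_poll_records records already fetched into the buffer."""
    batch = []
    while buffer and len(batch) < max_poll_records:
        batch.append(buffer.popleft())
    return batch

# Suppose one large fetch response delivered 1200 records for a partition:
buffer = deque(range(1200))
sizes = []
while buffer:
    sizes.append(len(poll(buffer)))
print(sizes)  # [500, 500, 200] -- poll() caps each batch; the fetch did not
```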
Hi Lud,
For the topics where you're seeing under-replicated partitions: did you
try to increase the number of partitions at any time after creating those
topics, before the upgrade?
We faced similar issues earlier with 2.8.0, where we had increased the
number of partitions for some topics, and for