art of other 2 brokers. The cluster was stabilized at this point.
However, we noticed under-replicated partitions and Preferred Replica
imbalance irregularities.

[xxx(user):/xxx/install/1.0.0/bin] ./kafka-topics.sh --describe --zookeeper
zookeeper1:2181 --under-replicated-partitions
Topic: ABC                 Partition: 3  Leader: 31  Replicas: 31,21,11  Isr: 31,11
Topic: __consumer_offsets  Partition: 1  Leader: 31  Replicas: 31,11,21  Isr: 31,11
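In the output above, partition 3 of ABC is under-replicated because broker 21 appears in Replicas but is missing from Isr. A minimal sketch of filtering `--describe` output down to such rows; the heredoc stands in for real output (assumed sample data), and exact field spacing varies by Kafka version:

```shell
# Print only lines whose Isr list is shorter than their Replicas list,
# i.e. the under-replicated partitions. Pipe real output in place of
# the heredoc, e.g.:
#   ./kafka-topics.sh --describe --zookeeper zookeeper1:2181 | awk '...'
awk '{
  for (i = 1; i <= NF; i++) {
    if ($i == "Replicas:") replicas = $(i + 1)
    if ($i == "Isr:")      isr      = $(i + 1)
  }
  # split() returns the element count, so this compares list lengths
  if (split(replicas, r, ",") > split(isr, s, ",")) print
}' <<'EOF'
Topic: ABC Partition: 3 Leader: 31 Replicas: 31,21,11 Isr: 31,11
Topic: ABC Partition: 4 Leader: 11 Replicas: 11,21,31 Isr: 11,21,31
EOF
```

Only the first sample line (3 replicas, 2 in sync) is printed; the fully replicated partition is filtered out. This is essentially what `--under-replicated-partitions` does server-side.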
On 27/07/2021 09:19, Sridhar Rao wrote:
Hi Everyone,
Recently we noticed a high number of under-replicated partitions after a
ZooKeeper split-brain issue.
We tried fixing the issue by running the ./kafka-reassign-partitions.sh
procedure. However, Kafka refuses to re-assign the partitions in the ISR,
and the under-replicated partitions remain the same.
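For reference, kafka-reassign-partitions.sh takes a JSON file naming the target replica list per partition. A minimal sketch of such a file, with the topic and broker ids borrowed from the thread and the file path an assumption; note that reassignment only moves or adds replicas, it cannot force a lagging follower back into the ISR (the follower rejoins only once it catches up):

```shell
# Write a reassignment file in the format the tool expects.
# Topic/partition/broker ids follow the thread's example and are
# assumptions; adjust for your cluster.
cat > /tmp/reassign.json <<'EOF'
{
  "version": 1,
  "partitions": [
    { "topic": "ABC", "partition": 3, "replicas": [31, 21, 11] }
  ]
}
EOF

# The actual run needs a live cluster, so it is only shown here:
#   ./kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 \
#     --reassignment-json-file /tmp/reassign.json --execute
#   ./kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 \
#     --reassignment-json-file /tmp/reassign.json --verify
cat /tmp/reassign.json
```

Running `--verify` afterwards reports whether each partition's reassignment completed; an in-progress reassignment can itself show up as under-replication until the new replicas catch up.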
For some reason, I am not able to get the “under-replicated partitions” metric
on my Kafka cluster to zero across all nodes. Even after I manually reassign
all the partitions, one server still has 928 under-replicated partitions. Also,
the number of partitions each server is leading is very
Hi all.
I've got a cluster of 3 brokers with around 50 topics. Several topics are
under replicated. Everything I've seen says I need to restart the followers to
fix that. All my under replicated topics have the same broker as the leader.
That makes me think it's a leader problem and not a
I'm running into this error while writing to the topics:
Caused by: org.apache.kafka.common.errors.NotEnoughReplicasException:
Messages are rejected since there are fewer in-sync replicas than
required.
The topics and the internal `__consumer_offsets` topic have a replication
factor set to 3. Wh
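That exception is raised for acks=all writes when the partition's ISR has fewer members than min.insync.replicas. A paraphrased sketch of the check (not Kafka's actual code; the config value and ISR contents below are assumptions for illustration):

```shell
# Paraphrase of the broker-side rule behind NotEnoughReplicasException:
# with acks=all, a write is rejected when |ISR| < min.insync.replicas.
min_insync_replicas=2
isr="31,11"                                   # assumed current ISR
isr_size=$(echo "$isr" | awk -F',' '{print NF}')

if [ "$isr_size" -lt "$min_insync_replicas" ]; then
  echo "NotEnoughReplicasException: isr=$isr_size, required=$min_insync_replicas"
else
  echo "write accepted: isr=$isr_size, required=$min_insync_replicas"
fi
# prints: write accepted: isr=2, required=2
```

So with replication factor 3 and min.insync.replicas=2, losing two replicas of a partition is enough to trigger the error, even though the topic nominally has three replicas.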
he DNS entries (IIRC) so we had to restart the kafka nodes anyway to get
the changes (this was with, I believe, 0.8, so it's possible this is
fixed in 0.10).
<...a/browse/ZOOKEEPER-1506>
t our Kafka brokers one at a time so they can pick up the new zookeeper
IP address.
What we've noticed is that, as the brokers are restarted, we get alerts for
under-replicated partitions, which seems strange since it seems like the
shutdown process should take care of moving any replicas and the leadership
election process.
be able to catch up later. Also, to prevent it from happening, you can tune
these two configs a bit.
c. You can use the topic command to list all under-replicated partitions in
real time:
http://kafka.apache.org/documentation.html#basic_ops_add_topic (use
bin/kafka-topics.sh --list)
Guozhang
On M
the issue, but I would like to know:
a. the factors that can cause under-replicated partitions
b. how to fix that issue? Is a restart an option?
c. apart from the MBean, any other way to get to know about under-replicated
partitions?
Regards,
Nitin Kumar Sharma.
Is there a known issue in the 0.8.0 version that was
fixed later on? What can I do to diagnose/fix the situation?
Yes, quite a few bugs related to this have been fixed since 0.8.0. I'd
suggest upgrading to 0.8.1.1
On Wed, Oct 15, 2014 at 11:09 PM, Jean-Pascal Billaud wrote:
> The only thing that I find very weird is the fact that brokers that are
> dead are still part of the ISR set for hours... and are basically not
> removed. Note this is not constantly the case, most of the dead brokers are
> properly removed and it is really just in a few cases. I am not sure why
> this wou
So I am using 0.8.0. I think I found the issue actually. It turns out that
some partitions only had a single replica and the leaders of those
partitions would basically "refuse" new writes. As soon as I reassigned
replicas to those partitions things kicked off again. Not sure if that's
expected...
Which version of Kafka are you using? The current stable one is 0.8.1.1
On Tue, Oct 14, 2014 at 5:51 PM, Jean-Pascal Billaud wrote:
> Hey Neha,
>
> so I removed another broker like 30mn ago and since then basically the
> Producer is dying with:
>
> Event queue is full of unsent messages, could not send event:
> KeyedMessage(my_topic,[B@1b71b7a6,[B@35fdd1e7)
> kafka.common.QueueFullException: Event queue is full of unsent messages,
> could not
Regarding (1), I am assuming that it is expected that brokers going down
will be brought back up soon. At which point, they will pick up from the
current leader and get back into the ISR. Am I right?
The broker will be added back to the ISR once it is restarted, but it never
goes out of the replic
hey folks,
I have been testing a kafka cluster of 10 nodes on AWS using version
2.8.0-0.8.0
and see some behavior on failover that I want to make sure I understand.
Initially, I have a topic X with 30 partitions and a replication factor of
3. Looking at the partition 0:
partition: 0 - leader: 5 p