It would also be stuck/waiting because of this, when
the reassignment json contains an offline directory.
-----Original Message-----
From: M. Manna
Sent: Wednesday, November 13, 2019 5:23 AM
To: Kafka Users
Subject: Re: Partition Reassignment is getting stuck
On Wed, 13 Nov 2019 at 13:10, Ashutosh singh wrote:
Yeah. Although it wouldn't have any impact, I will have to try this
tonight, as it is peak business hours now.
Instead of deleting all data, I will try to delete the topic partitions which
are having issues and then restart the broker. I believe it should catch up,
but I will let you know.
On Wed, 13 Nov 2019 at 12:41, Ashutosh singh wrote:
> Hi,
>
> All of a sudden I see an under-replicated partition in our Kafka cluster and
> it is not getting replicated. It seems to be getting stuck somewhere. The
> in-sync replica is missing only from one of the brokers; it seems there is some
> issue
I logged KAFKA-6413 for improving the error message
w.r.t. ReassignPartitionsCommand#parsePartitionReassignmentData()
FYI
On Sun, Dec 31, 2017 at 10:24 PM, allen chan
wrote:
Absolutely user error. Works after I removed the erroneous comma. Wish the
error message was more obvious.
Thanks Brett and Ted!
On Sun, Dec 31, 2017 at 6:29 PM, Ted Yu wrote:
I verified what Brett said through this code:

val (partitionsToBeReassigned, replicaAssignment) =
  ReassignPartitionsCommand.parsePartitionReassignmentData(
    "{\"version\":1,\"partitions\":[{\"topic\":\"metrics\",\"partition\":0,\"replicas\":[1,2]},{\"topic\":\"metrics\",\"partition\":1,\
That's happening because your JSON is malformed. Losing the last comma will
fix it.
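The failure mode is easy to reproduce in miniature. A sketch (the topic and replica values echo the snippet above; the trailing comma is the hypothetical mistake):

```python
import json

# Reassignment JSON with a stray trailing comma after the last entry --
# the malformed shape described above (values are illustrative).
bad = '{"version":1,"partitions":[{"topic":"metrics","partition":0,"replicas":[1,2]},]}'
# The same document with the erroneous comma removed parses fine.
good = '{"version":1,"partitions":[{"topic":"metrics","partition":0,"replicas":[1,2]}]}'

try:
    json.loads(bad)
except json.JSONDecodeError as err:
    print("malformed:", err)

print(json.loads(good)["partitions"][0]["replicas"])  # [1, 2]
```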
On Sun, Dec 31, 2017 at 3:43 PM, allen chan
wrote:
> Hello
>
> Kafka Version: 0.11.0.1
>
> I am trying to increase the replication factor for a topic and I am getting the
> below error. Can anyone help explain what t
the new
>> leader for the partition once it receives a LeaderAndIsrRequest from the
>> controller to update the new leader information. If these messages keep
>> getting logged for a long time then there might be an issue.
>> Can you maybe check the timestamp around [2015-04-21
> From: Wes Chow
> Reply-To: "users@kafka.apache.org"
> Date: Tuesday, April 21, 2015 at 1:29 PM
> To: "users@kafka.apache.org"
> Subject: Re: partition reassignment stuck
>
>
> Quick clarification: you say broker 0, but do you actually mean broker 25?
> 25
Not for that particular partition, but I am seeing these errors on 28:

kafka.common.NotAssignedReplicaException: Leader 28 failed to record
follower 25's position 0 for partition [click_engage,116] since the
replica 25 is not recognized to be one of the assigned replicas for
partition [click_engage,116]
Those .index files are for different partitions and
they should be generated if new replicas are assigned to the broker.
We might want to know what caused the UnknownException. Did you see any
error log on broker 28?
Jiangjie (Becket) Qin
On 4/21/15, 9:16 AM, "Wes Chow" wrote:
Perhaps you can upgrade all brokers and then try?
Thanks,
Jun
On Wed, Jan 21, 2015 at 9:53 PM, Raghu Udiyar wrote:
No errors in the state-change log or the controller. It's as if the
controller never got the request for that partition.
Regarding the upgrade, we did upgrade one of the nodes and initiated the
replication. Here, the controller was at 0.8.0 and this node at 0.8.1.1. In
this case, when we initiated the
Any error in the controller and state-change log? Also, you may want to
upgrade to 0.8.1, which fixed some reassignment issues.
Thanks,
Jun
On Wed, Jan 21, 2015 at 12:38 PM, Raghu Udiyar wrote:
> Hello,
>
> I have a 6 node kafka cluster (0.8.0) where partition reassignment doesn't
> seem to wo
Topic deletion doesn't quite work in 0.8.1.1. It's fixed in the upcoming
0.8.2 release.
Thanks,
Jun
On Wed, Dec 3, 2014 at 6:17 PM, Andrew Jorgensen <
ajorgen...@twitter.com.invalid> wrote:
We are currently running 0.8.1.1, I just double checked. One other thing that
may be related is I brought up a second kafka cluster today matching the first.
I noticed that if I deleted a topic and then re-created it with the same name,
none of the leader elections happ
Not sure exactly what happened there. We did fix a few bugs in reassigning
partitions in 0.8.1.1. So, you probably want to upgrade to that one or the
upcoming 0.8.2 release.
Thanks,
Jun
On Tue, Dec 2, 2014 at 2:33 PM, Andrew Jorgensen
wrote:
I am using kafka 0.8.
Yes, I did run --verify, but got some weird output from it I had never seen
before that looked something like:
Status of partition reassignment:
ERROR: Assigned replicas (5,2) don't match the list of replicas for
reassignment (5) for partition [topic-1,248]
ERROR: Assigned re
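The error above indicates the partition's current replica assignment no longer matches what the reassignment JSON requested. A rough sketch of that comparison (the function and message format are illustrative, not Kafka's code):

```python
# Sketch of the check behind --verify's mismatch error (illustrative only).
def verify_partition(assigned, requested):
    if assigned != requested:
        return ("ERROR: Assigned replicas (%s) don't match the list of replicas "
                "for reassignment (%s)"
                % (",".join(map(str, assigned)), ",".join(map(str, requested))))
    return "completed successfully"

# Values taken from the output quoted above: assigned (5,2) vs requested (5).
print(verify_partition([5, 2], [5]))
```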
Did you run the --verify option (
http://kafka.apache.org/documentation.html#basic_ops_restarting) to check
if the reassignment process completes? Also, what version of Kafka are you
using?
Thanks,
Jun
On Mon, Dec 1, 2014 at 7:16 PM, Andrew Jorgensen <
ajorgen...@twitter.com.invalid> wrote:
> I
Okay, so just to clarify, if I have a partition where the leader is broker
0, the ISR is [0, 1] and I make a partition reassignment with a new AR list
of [1, 0], broker 1 won't take over leadership? I was under the impression
that the "preferred" replica would become the leader, and that that would
> Partition reassignment will not move the leader unless the old leader is
> not part of the new set of replicas.
> Even when it does move the leader, it waits until the new replicas enter
> the ISR.
My current interpretation is that if I start a partition reassignment, for
the sake of simplicity let's assume it's just for a single partition, the
new leader will first become a follower of the current leader, and when it
has caught up it'll transfer leadership over to itself?
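The behaviour being discussed can be condensed into a small sketch (the helper names are mine, not Kafka's API): the "preferred" replica is the first broker in the assigned-replica list, and reassignment leaves leadership alone as long as the old leader stays in the new replica set.

```python
# Hypothetical helpers illustrating the thread's point; not Kafka API.
def preferred_replica(assigned_replicas):
    # The "preferred" replica is the first entry of the AR list.
    return assigned_replicas[0]

def leader_after_reassignment(old_leader, new_assigned):
    # Leadership moves only if the old leader left the replica set.
    if old_leader in new_assigned:
        return old_leader
    return preferred_replica(new_assigned)

print(leader_after_reassignment(0, [1, 0]))  # 0: broker 1 does not take over
```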
Good day,
We're currently running a Kafka cluster in our staging environment, testing
everything out and so far it runs fairly smoothly.
One thing that we're currently looking into is scaling a cluster that is
already up 'n serving data, and I have some questions regarding moving
partitions betwe
On Thu, Dec 12, 2013 at 9:46 PM, Jun Rao wrote:
Got it and done.
So it sounds like I should run a number of disparate clusters to spread
risk for topics, since a partition is an
Since we don't support delete topics yet, you would have to wipe out all ZK
and kafka logs.
Thanks,
Jun
On Thu, Dec 12, 2013 at 9:32 PM, David Birdsong wrote:
On Thu, Dec 12, 2013 at 9:28 PM, Jun Rao wrote:
> Could you try starting from scratch again? The recent fix that we had may
> not be able to recover a cluster already in an inconsistent state.
>
> Thanks,
>
> Jun
How does one start from scratch? Wipe ZK, is there some state file? I have
oth
On Thu, Dec 12, 2013 at 9:28 PM, Guozhang Wang wrote:
Which node do I look for this on? Leader? ISR-candidate? Controller?
David,
Could you try to see if this is due to
https://issues.apache.org/jira/browse/KAFKA-1178?
Guozhang
On Thu, Dec 12, 2013 at 8:45 PM, David Birdsong wrote:
> I was running a 2-node kafka cluster off github trunk at:
> eedbea6526986783257ad0e025c451a8ee3d9095
>
> ...for a few weeks with no
Could you try starting from scratch again? The recent fix that we had may
not be able to recover a cluster already in an inconsistent state.
Thanks,
Jun
On Thu, Dec 12, 2013 at 8:45 PM, David Birdsong wrote:
> I was running a 2-node kafka cluster off github trunk at:
> eedbea6526986783257ad0e
That's good to know.
Thanks for your help!
2013/12/9 Neha Narkhede
We will announce it on this mailing list. It is probably a month away from
a release.
Thanks,
Neha
On Mon, Dec 9, 2013 at 12:02 PM, Maxime Nay wrote:
Ok, thanks for your help. When 0.8.1 will be production ready, will you
announce it somewhere (will you release it right away) ?
Thanks,
Maxime
2013/12/9 Neha Narkhede
I wouldn't call 0.8.1 production ready just yet. We are still in the
process of deploying it at LinkedIn. Until it is ready, there isn't a good
cluster expansion solution other than spinning up a new cluster. This is
probably a little easier if you have a VIP in front of your kafka cluster.
Thanks
Hi,
We used the code checked in this branch a few hours before the official
0.8.0 final release : https://github.com/apache/kafka/tree/0.8
So hopefully it should be the exact same code as the official release.
The controller logs are empty.
In a previous exchange you advised us to not use trunk
Unfortunately quite a few bugs with reassignment are fixed only in 0.8.1. I
wonder if you can run trunk and see how that goes?
Thanks,
Neha
On Dec 6, 2013 9:46 PM, "Jun Rao" wrote:
Are you using the 0.8.0 final release? Any error in controller log?
Thanks,
Jun
On Fri, Dec 6, 2013 at 4:38 PM, Maxime Nay wrote:
> Hi,
>
> We are trying to add a broker to a 10 node cluster. We have 7 different
> topics, each of them is divided in 10 partitions, and their replication
> facto
Thanks for the advice!
On Wed, Oct 16, 2013 at 7:57 AM, Jun Rao wrote:
Make sure that there are no under-replicated partitions (use the
--under-replicated option in the list topic command) before you run that
tool.
Thanks,
Jun
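As a sketch of what "under-replicated" means here (the helper name is mine, not Kafka code): a partition is under-replicated when its ISR no longer covers its assigned replica set.

```python
# Illustrative check, not Kafka code: a partition is under-replicated
# when some assigned replica has dropped out of the in-sync set (ISR).
def under_replicated(assigned, isr):
    return not set(assigned) <= set(isr)

print(under_replicated([1, 2, 3], [1, 2]))     # True: replica 3 is lagging
print(under_replicated([1, 2, 3], [3, 2, 1]))  # False: fully in sync
```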
On Wed, Oct 16, 2013 at 12:29 AM, Kane Kane wrote:
There is a ticket for auto-rebalancing, hopefully they'll do auto
redistribution soon
https://issues.apache.org/jira/browse/KAFKA-930
On Wed, Oct 16, 2013 at 12:29 AM, Kane Kane wrote:
Yes, thanks, looks like that's what I need. Do you know why it tends to
choose the leader for all partitions on a single broker, even though I have 3?
On Wed, Oct 16, 2013 at 12:19 AM, Joel Koshy wrote:
Did the reassignment complete? If the assigned replicas are in ISR and
the preferred replicas for the partitions are evenly distributed
across the brokers (which seems to be a case on a cursory glance of
your assignment) you can use this tool:
https://cwiki.apache.org/confluence/display/KAFKA/Repli
Oh I see, what is the better way to initiate the leader change? As I said,
somehow all my partitions have the same leader for some reason. I have 3
brokers and all partitions have their leader on a single one.
On Wed, Oct 16, 2013 at 12:04 AM, Joel Koshy wrote:
For a leader change yes, but this is partition reassignment which
completes when all the reassigned replicas are in sync with the
original replica(s). You can check the status of the command using the
option I mentioned earlier.
On Tue, Oct 15, 2013 at 7:02 PM, Kane Kane wrote:
I thought if I have all replicas in sync, the leader change should be much
faster?
On Tue, Oct 15, 2013 at 5:12 PM, Joel Koshy wrote:
Depending on how much data there is in those partitions it can take a
while for reassignment to actually complete. You will need to use the
--status-check-json-file option of the reassign partitions command to
determine whether partition reassignment has completed or not.
Joel
On Tue, Oct 15, 20
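Joel's completion criterion can be written as a one-line set check (a sketch; the helper name is mine): reassignment is done once every reassigned replica has joined the ISR.

```python
# Sketch of the completion condition described above (not Kafka code).
def reassignment_complete(reassigned, isr):
    return set(reassigned) <= set(isr)

print(reassignment_complete([1, 2], [1]))     # False: broker 2 still catching up
print(reassignment_complete([1, 2], [2, 1]))  # True
```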