We were ultimately able to solve this issue - mainly by sitting and waiting.
The issue was indeed that at some point, somehow, the data on the leader of
this __consumer_offsets-18 partition got corrupted. This probably
happened during the upgrade from Kafka 2.2 -> 2.6. We were doing this in
a rather dang
We did an upgrade from Kafka 2.2 to 2.6, followed by a migration
(through reassign-partitions) from old to new brokers.
As described in
https://stackoverflow.com/questions/64514851/apache-kafka-kafka-common-offsetsoutoforderexception-when-reassigning-consume,
all but 1 partition (__consumer_offset
-Original Message-
From: Yingshuan Song
Sent: Thursday, August 13, 2020 23:42
To: users@kafka.apache.org
Subject: Re: will partition reassignment cause same message being processed in
parallel?
Yes, it is possible.
Think
Hi,
Partition may be reassigned if the consumer is not reachable. Will partition
reassignment cause same message being processed in parallel?
Suppose if Kafka found consumer A is not reachable (maybe because of network
problem), it assigns the partition to consumer B. But actually consumer A is
The topic partition having the ISR issue might be on an offline directory. Look
into the metric "offlineLogDirectoryCount" or use kafka-log-dirs.sh to
understand the issue with that directory. In most cases, it would be a
KafkaStorageException.
The partition reassignment would also be stuck/waiting because of this
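For readers hitting the same symptom, the JSON that kafka-log-dirs.sh --describe prints can be filtered for offline directories with a few lines of Python. This is only a sketch under assumptions: the broker id, the directory paths, and the exact JSON shape (a "brokers" list of "logDirs" entries whose "error" field is non-null when the directory is offline) are illustrative, not taken from the thread.

```python
import json

# Hypothetical sample of `kafka-log-dirs.sh --describe` output; the shape
# (brokers -> logDirs -> error) is an assumption about recent Kafka versions,
# where a non-null "error" marks an offline directory.
sample = json.loads("""{
  "version": 1,
  "brokers": [
    {"broker": 1, "logDirs": [
      {"logDir": "/data/kafka-a", "error": null, "partitions": []},
      {"logDir": "/data/kafka-b",
       "error": "org.apache.kafka.common.errors.KafkaStorageException",
       "partitions": []}
    ]}
  ]
}""")

# Collect (broker, directory) pairs whose error field is set.
offline = [(b["broker"], d["logDir"])
           for b in sample["brokers"]
           for d in b["logDirs"]
           if d["error"] is not None]
print(offline)  # [(1, '/data/kafka-b')]
```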
On Wed, 13 Nov 2019 at 13:10, Ashutosh singh wrote:
Yeah, although it wouldn't have any impact, I will have to try this
tonight as it is peak business hours now.
Instead of deleting all data, I will try to delete the topic partitions which
are having issues and then restart the broker. I believe it should catch up,
but I will let you know.
On Wed, 13 Nov 2019 at 12:41, Ashutosh singh wrote:
Hi,
All of a sudden I see an under-replicated partition in our Kafka cluster, and
it is not getting replicated. It seems to be getting stuck somewhere. The
in-sync replica is missing only from one of the brokers. It seems there is some
issue with that broker, but on the other hand there are many other topics on t
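As a rough illustration of how one might spot such partitions mechanically, the output of kafka-topics.sh --describe can be scanned for partitions whose ISR is smaller than their replica list. The topic name and the sample lines below are hypothetical, not output from the poster's cluster; the field layout is assumed from the usual --describe format.

```python
import re

# Hypothetical sample of `kafka-topics.sh --describe` output.
describe_output = """\
Topic: events\tPartition: 0\tLeader: 1\tReplicas: 1,2\tIsr: 1,2
Topic: events\tPartition: 1\tLeader: 2\tReplicas: 2,3\tIsr: 2
Topic: events\tPartition: 2\tLeader: 3\tReplicas: 3,1\tIsr: 3,1
"""

def under_replicated(text):
    """Return (topic, partition, missing_brokers) for partitions whose ISR
    is smaller than their replica list."""
    out = []
    for line in text.splitlines():
        m = re.search(
            r"Topic:\s*(\S+)\s+Partition:\s*(\d+).*?"
            r"Replicas:\s*([\d,]+)\s+Isr:\s*([\d,]+)", line)
        if not m:
            continue
        topic, part, replicas, isr = m.groups()
        missing = set(replicas.split(",")) - set(isr.split(","))
        if missing:
            out.append((topic, int(part), sorted(missing)))
    return out

print(under_replicated(describe_output))  # [('events', 1, ['3'])]
```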
Have you seen this thread ?
http://search-hadoop.com/m/Kafka/uyzND1pHiNuYt8hc1?subj=Re+Question+Kafka+Reassign+partitions+tool
On Thu, Feb 8, 2018 at 4:12 PM, Dylan Martin wrote:
Hi all.
I'm trying to cancel a failed partition reassignment. I've heard that this can
be done by deleting /admin/reassign_partitions in zookeeper. I've tried, and
/admin/reassign_partitions won't go away.
Does anyone know a way to cancel a partition reassignment?
-Dy
…explain what the error means? The json is not empty

$ cat increase-replication-factor.json
{"version":1,
"partitions":[
{"topic":"metrics","partition":0,"replicas":[1,2]},
{"topic":"metrics","partition":1,"replicas":[2,3]},
]}

$ sudo /opt/kafka/kafka_2.12-0.11.0.1/bin/kafka-reassign-partitions.sh
--zookeeper server1:2181 --reassignment-json-file
increase-replication-factor.json --execute
Partitions reassignment failed due to Partition reassignment data file is
empty
kafka.common.AdminCommandFailedException: Partition reassignment data file
is empty
at
kafka.admin.ReassignPartitionsCommand$.parseAndValidate(ReassignPartitionsCommand.scala:
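A likely culprit here is the trailing comma after the last partition entry: strict JSON parsers reject the file, and the reassignment tool then appears to report the unparseable file as "empty". A minimal sketch in plain Python, reproducing the file from the thread, shows the difference (the diagnosis is an inference from the error message, not confirmed in the thread):

```python
import json

# The file from the thread, verbatim: note the trailing comma after the
# last partition entry.
bad = '''{"version":1,
"partitions":[
{"topic":"metrics","partition":0,"replicas":[1,2]},
{"topic":"metrics","partition":1,"replicas":[2,3]},
]}'''

try:
    json.loads(bad)
    parsed = True
except json.JSONDecodeError:
    parsed = False
print(parsed)  # False: strict parsers reject the trailing comma

# The same file with the trailing comma removed parses fine.
good = '''{"version":1,
"partitions":[
{"topic":"metrics","partition":0,"replicas":[1,2]},
{"topic":"metrics","partition":1,"replicas":[2,3]}
]}'''
print(len(json.loads(good)["partitions"]))  # 2
```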
Hello!
We tried to migrate data from a 0.10.2.1 cluster to 0.11.0.2. First we
spread topics to both clusters. There were lots of problems and restarts
of some nodes of both clusters (we probably shouldn't have done that). All
this ended up in a state where we had lots of exceptions from 2 nodes of
Hi,
I'm using kafka-reassign-partitions.sh to move partitions around; however,
sometimes I get a partition reassignment failure. The cluster is healthy
before the rebalance, and a retry after 10 mins resolved the problem.
However, I wonder if there's a way I can check why the reassignment
Hello all,
I have a design for a solution to the problem of "partition imbalances in
Kafka clusters".
It would be great to get some feedback on it.
https://soumyajitsahu.wordpress.com/2016/05/11/kafka-partition-reassignment-service-using-an-adoption-marketplace-model/
I have also put
Hi team,
What is the best way to cancel an in-progress partition reassignment job? I
know it saves the json in /admin/reassign_partitions in zk. Is it ok to
delete the znode?
Hi All,
We recently performed a partition reassignment on one of our Kafka clusters
(0.8.1.1, all topics configured to have 2 replicas).
After the completion of the reassignment, we have noticed that some of the
partitions have 3 replicas and 3 ISRs instead of 2.
For example:
Topic: topic1
Thank you Todd for the detailed answer
On Wed, Aug 5, 2015 at 9:09 PM Todd Palino wrote:
To make sure you have a complete answer here, the order of the replica list
that you specify in the partition reassignment will affect the leader
selection, but if the current leader is in the new replica list, it will
not force the leadership to change.
That is, if your current replica list is
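Todd's point about replica order can be sketched as a small plan-rewriting helper: put the broker you want as preferred leader first in each partition's replica list. Leadership still only moves to it after a preferred-replica election or automatic leader rebalance. The helper name, topics, and broker ids below are illustrative, not from the thread; only the plan's JSON shape matches the tool's format.

```python
# Sketch (assumed plan format of kafka-reassign-partitions.sh): reorder each
# replica list so the desired preferred leader comes first.
def prefer_leader(plan, leaders):
    """leaders maps (topic, partition) -> broker id to place first."""
    for p in plan["partitions"]:
        want = leaders.get((p["topic"], p["partition"]))
        if want is not None and want in p["replicas"]:
            p["replicas"] = [want] + [r for r in p["replicas"] if r != want]
    return plan

plan = {"version": 1, "partitions": [
    {"topic": "test5", "partition": 0, "replicas": [2, 3]},
    {"topic": "test3", "partition": 0, "replicas": [3, 2]},
]}
plan = prefer_leader(plan, {("test5", 0): 3, ("test3", 0): 2})
print(plan["partitions"][0]["replicas"])  # [3, 2]
print(plan["partitions"][1]["replicas"])  # [2, 3]
```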
Hi team,
Is it possible to specify a leader broker for each topic partition when
doing partition reassignment?
For example I have the following json. Is the first broker in the replicas list
by default the leader of the partition, e.g. broker 3 is the leader of topic
test5 and broker 2 is the leader of topic test3, or does Kafka au
On Tue, Jun 16, 2015 at 8:15 AM, Yu Yang wrote:
Hi,
We have a kafka 0.8.1.1 cluster. Recently I did a partition reassignment for
some topic partitions in the cluster. Due to a broker failure, the partition
reassignment failed. I cannot do another partition reassignment now, and
always get errors as follows. How can we work around this? I have tried
google for answers, but
…you!
Chris

On Sun, May 17, 2015 at 12:58 PM, Clark Haskins wrote:
> Do a get /admin/reassign_partitions
On May 17, 2015, at 10:20 AM, Chris Neal wrote:
> Sure thing :)
> Hopefully I did this right. Somewhat of a Zookeeper noob.
>
> …zkCli.sh -server myhost.mydomain.com:2181 ls /admin/reassign_partitions
> Connecting to myhost.mydomain.com:2181
>
> WATCHER::
>
> WatchedEvent state:SyncConnected type:None path:null
> []
>
> Hope that is helpful :)
> If this is not what you were asking for, please just let me know.
> Thank you!
> Chris
> The reassign_partitions znode is the important one. Please paste the
> contents of it. That node should only exist while there is a reassignment
> in progress.
>
> You can probably fix this up by forcing a new controller to come online by
> deleting /controller
Sent from my iPhone

On May 17, 2015, at 10:14 AM, Chris Neal wrote:
> Hi Clark,
>
> Thank you for your reply! I do see that znode under /admin:
> ./opt/cloudera/parcels/CDH-5.1.3-1.cdh5.1.3.p0.12/lib/zookeeper/bin/zkCli.sh
> -server myhost.mydomain.com:2181 ls /admin
> Connecting to myhost.mydomain.com:2181
>
> WATCHER::
>
> WatchedEvent state:SyncConnected type:None path:null
> [reassign_partitions, delete_topics]
>
> I'm not sure what this tells me though :)
> Again, thanks for your time.
> Chris

On Sun, May 17, 2015 at 12:20 AM, Clark Haskins wrote:
Does the partition reassignment znode exist under /admin in zookeeper?
-Clark
Sent from my iPhone
> On May 16, 2015, at 7:16 PM, Chris Neal wrote:
>
> Sorry for bumping my own thread. :S Just wanted to get it in front of some
> eyes again!
>
> Thanks for your time and help
Hi All,
I am running kafka_2.10-0.8.1.1, and when I run the
reassign-partitions.sh script, I get this:
Partitions reassignment failed due to Partition reassignment currently in
progress for Map(). Aborting operation
kafka.common.AdminCommandFailedException: Partition reassignment currently
in progress for Map(). Aborting operation
at kafka.admin.ReassignPar
Yes, it should be broker 25, thread 0, from the log.
This needs to be resolved; you might need to bounce both of the brokers that
each think they are the controller. The new controller should then be able to
continue the partition reassignment.
From: Wes Chow <w...@chartbeat.com>
Re
Subject: Re: partition reassignment stuck
Not for that particular partition, but I am seeing these errors on 28:
kafka.common.NotAssignedReplicaException: Leader 28 failed to record
follower 25's position 0 for partition [clic
…ated if new replicas are assigned to the broker.
We might want to know what caused the UnknownException. Did you see any
error log on broker 28?
Jiangjie (Becket) Qin

Wes Chow <w...@chartbeat.com>
April 21, 2015 at 12:16 PM
I started a partition reassignment (this is an 0.8.1.1 cluster) some time
ago and it seems to be stuck. Partitions are no longer getting moved
around, but it seems like the cluster is operational otherwise. The
stuck nodes have a lot of .index files, and their
logs show errors
I'm in the process of reassigning partitions away from failing machines
and it appears to be stuck. One thought is because our machines are
failing at a very high rate and so some partitions no longer have any
live replicas at all. At this point I don't care about the data, I just
want to get
Any error in the controller and state-change log? Also, you may want to
upgrade to 0.8.1, which fixed some reassignment issues.
Thanks,
Jun
On Wed, Jan 21, 2015 at 12:38 PM, Raghu Udiyar wrote:
Hello,
I have a 6 node kafka cluster (0.8.0) where partition reassignment doesn't
seem to work on a few partitions. This happens within the same topic, as well
as across other topics. Following is the behavior observed:
1. For a successful reassignment, the kafka-reassign-partitions.sh re
The reassignment tool outputs the original assignment before executing the
next one. If you have that saved, you can initiate another assignment to
make it go to its initial state. That is probably a safer way to fix the
reassignment.
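That advice can be automated: the --execute run prints the current assignment before applying the new one, so that JSON can be captured and saved as a rollback file to replay later with --execute. The tool_output string below is an assumed example of that output, not a verbatim capture from any version:

```python
import json
import re

# Assumed example of what kafka-reassign-partitions.sh --execute prints
# before applying a new plan.
tool_output = """Current partition replica assignment

{"version":1,"partitions":[{"topic":"topic1","partition":7,"replicas":[4,5]}]}

Save this to use as the --reassignment-json-file option during rollback"""

# Grab the line that is a complete JSON object and parse it.
match = re.search(r"^\{.*\}$", tool_output, re.MULTILINE)
rollback = json.loads(match.group(0))
print(rollback["partitions"][0]["replicas"])  # [4, 5]
```

Writing `rollback` back out to a file and passing it to --execute would then restore the saved assignment.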
On Wed, Dec 17, 2014 at 3:12 PM, Salman Ahmed wrote:
I had an issue where one kafka node was filling up on disk space. I used
the reassignment script in an incorrect way, overloading a large number of
topics/partitions on two target machines, which caused kafka to stop on
those machines.
I would like to cancel the reassignment process, and restore it t
I am using kafka 0.8.
Yes I did run --verify, but got some weird output from it I had never seen
before that looked something like:
Status of partition reassignment:
ERROR: Assigned replicas (5,2) don't match the list of replicas for
reassignment (5) for partition [topic-1,248]
ERROR: Assigned replicas (7,3) don't match the list of replicas for
reassignme
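Those ERROR lines lend themselves to mechanical parsing, e.g. to diff the actual vs. expected replica lists per partition across hundreds of partitions. A hedged sketch: the first sample line is from the thread, the second is completed hypothetically since the original message is truncated.

```python
import re

# First ERROR line is verbatim from the thread; the second is a hypothetical
# completion (the original is cut off).
verify_output = """Status of partition reassignment:
ERROR: Assigned replicas (5,2) don't match the list of replicas for reassignment (5) for partition [topic-1,248]
ERROR: Assigned replicas (7,3) don't match the list of replicas for reassignment (7) for partition [topic-1,249]"""

pattern = re.compile(
    r"Assigned replicas \(([\d,]+)\) don't match the list of replicas for "
    r"reassignment \(([\d,]+)\) for partition \[(.+),(\d+)\]")

# (topic, partition, assigned_replicas, expected_replicas) per ERROR line.
mismatches = [
    (m.group(3), int(m.group(4)),
     [int(x) for x in m.group(1).split(",")],
     [int(x) for x in m.group(2).split(",")])
    for m in pattern.finditer(verify_output)
]
print(mismatches[0])  # ('topic-1', 248, [5, 2], [5])
```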
Is there an easy way to reproduce the issues that you saw?
Thanks,
Jun
On Mon, Dec 1, 2014 at 6:31 AM, Karol Nowak wrote:
> Hi,
>
> I observed some error messages / exceptions while running partition
> reassignment on kafka 0.8.1.1 cluster. Being fairly new to this system I
…weeks prior I did a partition
reassignment to add four new kafka brokers to the cluster. This cluster has 4
topics, each with 350 partitions, a retention policy of 6 hours, and
a replication factor of 1. Originally I attempted to run a migration on all of
the topics and partitions
Hi,
I observed some error messages / exceptions while running partition
reassignment on a kafka 0.8.1.1 cluster. Being fairly new to this system, I'm
not sure if these indicate serious failures or transient problems, or if
manual intervention is needed.
I used kafka-reassign-partitions.
Okay, so just to clarify, if I have a partition where the leader is broker
0, the ISR is [0, 1] and I make a partition reassignment with a new AR list
of [1, 0], broker 1 won't take over leadership? I was under the impression
that the "preferred" replica would become the leader
> Partition reassignment will not move the leader unless the old leader is
> not part of the new set of replicas.
> Even when it does move the leader, it waits until the new replicas enter
> the ISR.
My current interpretation is that if I start a partition reassignment, for
the sake of simplicity let's assume it's just for a single partition, the
new leader will first become a follower of the current leader, and when it
has caught up it'll transfer leadership over to its
Hi,
I am currently running a Kafka 0.8.1.1 cluster with 8 servers. I would like
to add a new broker to the cluster. Each kafka instance has 400 GB of data,
and we are using a replication factor of 3 with 50 partitions for each
topic.
I have checked the documentation, and especially the section d
My hypothesis for how Partition [luke3,3] with leader 11 had its offset reset
to zero, caused by a reboot of the leader broker during partition reassignment:
The replicas for [luke3,3] were in the process of being reassigned from brokers
10,11,12 -> 11,12,13.
I rebooted broker 11, which was the leader
Hello,
I am testing kafka 0.8.1.1 in preparation for an upgrade from
kafka-0.8.1-beta. I have a 4 node cluster with one broker per node, and a
topic with 8 partitions and 3 replicas. Each partition has about 6
million records.
I generated a partition reassignment json that basically causes all
partitions to be shifted by one broker. As the reassig
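A "shift every partition by one broker" plan like the one described can be generated rather than hand-written. A minimal sketch, assuming brokers are numbered 1..n; the topic name and starting assignment are illustrative, not the poster's actual layout:

```python
# Hedged sketch: map each replica b to (b % num_brokers) + 1, i.e. rotate
# every replica to the next broker id, wrapping n back around to 1.
def shift_plan(assignment, num_brokers):
    return {"version": 1, "partitions": [
        {"topic": t, "partition": p,
         "replicas": [(b % num_brokers) + 1 for b in replicas]}
        for (t, p), replicas in sorted(assignment.items())
    ]}

# Illustrative current assignment: (topic, partition) -> replica list.
current = {("mytopic", 0): [1, 2, 3], ("mytopic", 1): [2, 3, 4]}
plan = shift_plan(current, num_brokers=4)
print(plan["partitions"][0]["replicas"])  # [2, 3, 4]
print(plan["partitions"][1]["replicas"])  # [3, 4, 1]
```

Dumping `plan` as JSON would produce a file in the shape kafka-reassign-partitions.sh expects.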
Which version of Kafka are you using?
Thanks,
Jun
On Mon, Apr 21, 2014 at 11:41 AM, Ryan Berdeen wrote:
> After doing some partition reassignments, I've ended up with some
> partitions that have both the old and new brokers assigned.
>
> The output of kafka-topics.sh --describe looks like thi
Hi Ryan,
Also KAFKA-1317 should be fixed in both trunk and latest 0.8.1 branch,
are you running with either or just with one of the previous released
versions?
Tim
On Mon, Apr 21, 2014 at 5:00 PM, Guozhang Wang wrote:
Hi Ryan,
Did you see any error logs on the new controller's controller log and
state-change log?
Guozhang
On Mon, Apr 21, 2014 at 11:41 AM, Ryan Berdeen wrote:
After doing some partition reassignments, I've ended up with some
partitions that have both the old and new brokers assigned.
The output of kafka-topics.sh --describe looks like this:
Topic:cs-es-indexer-a PartitionCount:30 ReplicationFactor:2 Configs:
retention.ms=1080
...
Topic: cs-es-ind
On Thu, Dec 12, 2013 at 9:46 PM, Jun Rao wrote:
> Since we don't support delete topics yet, you would have to wipe out all ZK
> and kafka logs.
>
> Thanks,
>
> Jun
>
>
Got it and done.
So it sounds like I should run a number of disparate clusters to spread
risk for topics, since a partition is an
On Thu, Dec 12, 2013 at 9:28 PM, Jun Rao wrote:
> Could you try starting from scratch again? The recent fix that we had may
> not be able to recover a cluster already in an inconsistent state.
>
> Thanks,
>
> Jun
>
>
>
How does one start from scratch? Wipe ZK, is there some state file? I have
oth
On Thu, Dec 12, 2013 at 9:28 PM, Guozhang Wang wrote:
> David,
>
> Could you try to see if this is due to
> https://issues.apache.org/jira/browse/KAFKA-1178?
>
> Guozhang
>
Which node do I look for this on? Leader? ISR-candidate? Controller?
Could you try starting from scratch again? The recent fix that we had may
not be able to recover a cluster already in an inconsistent state.
Thanks,
Jun
I was running a 2-node kafka cluster off github trunk at:
eedbea6526986783257ad0e025c451a8ee3d9095
...for a few weeks with no issues. I recently downloaded the 0.8 stable
version, and configured/started two new brokers with 0.8.
I successfully reassigned all but 1 partition from the older pair to th
…al release? Any error in the controller log?
Thanks,
Jun

On Fri, Dec 6, 2013 at 4:38 PM, Maxime Nay wrote:
> Hi,
>
> We are trying to add a broker to a 10 node cluster. We have 7 different
> topics, each of them is divided in 10 partitions, and their replication
> factor is 3.
>
> To send traffic to this new node, we tried the
> kafka-reassign-partitions.sh