My recollection is that you have to come up with the partition assignment
yourself and pass the JSON file as an argument.
This is quite error prone, especially during an outage.
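
For reference, the manual flow with the stock tooling looks roughly like
this (a minimal sketch; the topic name, broker ids, and ZooKeeper address
are all made up):

  # Hand-write the reassignment plan: every replica list, for every
  # partition you want to move (illustrative topic/ids).
  cat > plan.json <<'EOF'
  {
    "version": 1,
    "partitions": [
      {"topic": "mytopic", "partition": 0, "replicas": [4, 2, 3]},
      {"topic": "mytopic", "partition": 1, "replicas": [2, 3, 4]}
    ]
  }
  EOF

  # Submit the plan, then poll until the move completes.
  kafka-reassign-partitions.sh --zookeeper zk:2181 \
    --reassignment-json-file plan.json --execute
  kafka-reassign-partitions.sh --zookeeper zk:2181 \
    --reassignment-json-file plan.json --verify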

We quickly wrote kafkat to provide simple commands that let us express
operational needs without having to deal with the assignment plan: "move
topic X to these brokers", "retire broker X", "set the replication factor
of topic X to R", etc.




On Sat, Mar 5, 2016 at 3:33 PM Guozhang Wang <wangg...@gmail.com> wrote:

> Hello Alexis,
>
> Could you share your findings about the command line tool? We can try to
> resolve any issues.
>
> Guozhang
>
> On Fri, Mar 4, 2016 at 3:13 PM, Alexis Midon <
> alexis.mi...@airbnb.com.invalid> wrote:
>
> > The command line tool that ships with Kafka is error prone.
> >
> > Our standard procedure is:
> > 1. spin up the new broker
> > 2. use `kafkat drain <old broker id> [--brokers <new broker id>]`
> > 3. shut down old broker
> >
> > The `drain` command will generate and submit a partition assignment plan
> > where the new broker id replaces the old one. It's pretty much a
> > "gsub(old, new)".
> >
> > We do it regularly. It's almost a mundane operation. The only challenge
> > is the volume of data being transferred over the network. Since there is
> > no throttling mechanism, the network is sometimes saturated, which might
> > impact other consumers/producers.
> >
> > See https://github.com/airbnb/kafkat
> >
> >
> >
> >
> >
> > On Fri, Mar 4, 2016 at 7:28 AM Todd Palino <tpal...@gmail.com> wrote:
> >
> > > To answer your questions…
> > >
> > > 1 - Not in the way you want it to. There is a setting for automatic
> > > leader election (which I do not recommend anyone use at this time), but
> > > all that does is pick which of the currently assigned replicas should
> > > be the leader. It does not reassign partitions from one broker to
> > > another. Kafka does not have a facility for doing this automatically.
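> > >
> > > (For context: the setting in question is presumably
> > > auto.leader.rebalance.enable in the broker config; the same election
> > > can instead be run by hand, on demand:)
> > >
> > >   # broker config: periodically move leadership back to the
> > >   # "preferred" replica -- the behavior discouraged above
> > >   auto.leader.rebalance.enable=true
> > >
> > >   # manual, on-demand equivalent (ZooKeeper address illustrative)
> > >   kafka-preferred-replica-election.sh --zookeeper zk:2181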
> > >
> > > 2 - No. The most you can do is move all the partitions off and then
> > > immediately shut down the broker process. Any broker that is live in
> > > the cluster can, and will, get partitions assigned to it by the
> > > controller.
> > >
> > > For what you want to do, you need to use the partition reassignment
> > > command line tool that ships with Kafka to reassign partitions from the
> > > old broker to the new one. Once that is complete, you can double check
> > > that the old broker has no partitions left and shut it down. I have a
> > > tool that we use internally to make this a lot easier, and I’m in the
> > > process of getting a repository set up to make it available via open
> > > source. It allows for more easily removing and adding brokers, and
> > > rebalancing partitions in a cluster without having to craft the
> > > reassignments by hand.
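> > >
> > > The stock flow is roughly the following (topic, ids and paths are
> > > illustrative):
> > >
> > >   # Ask the tool to propose an assignment moving the listed topics
> > >   # onto brokers 2, 3 and 4.
> > >   echo '{"version":1,"topics":[{"topic":"mytopic"}]}' > topics.json
> > >   kafka-reassign-partitions.sh --zookeeper zk:2181 \
> > >     --topics-to-move-json-file topics.json \
> > >     --broker-list "2,3,4" --generate
> > >   # Save the proposed assignment it prints as plan.json, then submit
> > >   # it with --execute and poll with --verify (as shown earlier in
> > >   # the thread).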
> > >
> > > -Todd
> > >
> > >
> > > On Fri, Mar 4, 2016 at 5:07 AM, Muqtafi Akhmad <muqt...@traveloka.com>
> > > wrote:
> > >
> > > > Dear Kafka users,
> > > >
> > > > I have some questions regarding decommissioning a Kafka broker node
> > > > and replacing it with a new one. Let's say that we have three broker
> > > > nodes and each topic in Kafka has replication factor = 3, and we
> > > > upgrade one node with the following steps:
> > > > 1. add one broker node to the cluster
> > > > 2. shut down the old broker node
> > > >
> > > > My questions are:
> > > > 1. When we add one new broker to the cluster, will it trigger a Kafka
> > > > topic / group leadership rebalance?
> > > > 2. Is there any way to prevent the to-be-decommissioned node from
> > > > holding any topic/group leadership (acting as a passive copy), so
> > > > that it can be decommissioned with minimal effect on Kafka clients?
> > > >
> > > > Thank you,
> > > >
> > > > --
> > > > Muqtafi Akhmad
> > > > Software Engineer
> > > > Traveloka
> > > >
> > >
> > >
> > >
> > > --
> > > *—-*
> > > *Todd Palino*
> > > Staff Site Reliability Engineer
> > > Data Infrastructure Streaming
> > >
> > >
> > >
> > > linkedin.com/in/toddpalino
> > >
> >
>
>
>
> --
> -- Guozhang
>
