When creating a new topic, we require the number of live brokers to be equal
to or larger than the number of replicas. Without enough brokers, the replica
assignment can't be completed, since we can't place more than one replica of a
partition on the same broker.
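
As a rough sketch of this check (using the modern Java AdminClient, which
postdates this thread; the broker address and topic name below are just
placeholders), a create request that asks for more replicas than there are
live brokers fails instead of putting two replicas on one broker:

    // Sketch only: modern Java AdminClient; "localhost:9092" and
    // "example-topic" are illustrative placeholders.
    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.InvalidReplicationFactorException;

    public class CreateTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                // Ask for 1 partition with replication factor 2.
                NewTopic topic = new NewTopic("example-topic", 1, (short) 2);
                try {
                    admin.createTopics(List.of(topic)).all().get();
                } catch (ExecutionException e) {
                    // With only one live broker, the broker rejects the request.
                    if (e.getCause() instanceof InvalidReplicationFactorException) {
                        System.err.println("Not enough live brokers: " + e.getCause().getMessage());
                    } else {
                        throw e;
                    }
                }
            }
        }
    }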

Thanks,

Jun


On Tue, Oct 15, 2013 at 1:47 PM, Jason Rosenberg <j...@squareup.com> wrote:

> Is there a fundamental reason for not allowing creation of new topics while
> the cluster is in an under-replicated state? For systems that rely on
> automatic topic creation, losing a node in this case seems akin to the whole
> cluster being unavailable.
>
>
> On Tue, Oct 15, 2013 at 1:25 PM, Joel Koshy <jjkosh...@gmail.com> wrote:
>
> > Steve - that's right. I think Monika wanted clarification on what would
> > happen if the replication factor is two and only one broker is available.
> > In that case, you won't be able to create new topics with replication
> > factor two (you should see an AdministrationException saying the
> > replication factor is larger than the number of available brokers).
> >
> > However, you can send messages to available partitions of topics that
> > have already been created - because the ISR would shrink to only one
> > replica for those topics - although the cluster would be in an
> > under-replicated state. This is covered in the documentation
> > (http://kafka.apache.org/documentation.html#replication) under the
> > discussion about ISR.
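
As a rough sketch of that behavior (using the modern Java producer rather than
the producer API of that era; the broker address, topic name, and settings are
placeholders), writes to an already-created topic keep succeeding while the
ISR has shrunk to the surviving replica:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class SendWhileUnderReplicated {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder address; point this at the surviving broker.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // acks=1: the partition leader's acknowledgement is enough, so a
            // shrunken ISR does not block the send (at the cost of durability).
            props.put(ProducerConfig.ACKS_CONFIG, "1");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // "existing-topic" stands for a topic created before the failure.
                producer.send(new ProducerRecord<>("existing-topic", "key", "value")).get();
            }
        }
    }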
> >
> > Thanks,
> >
> > Joel
> >
> > On Tue, Oct 15, 2013 at 10:19 AM, Steve Morin <st...@stevemorin.com>
> > wrote:
> > > If you have a double broker failure with a replication factor of 2 and
> > > only have 2 brokers in the cluster, wouldn't every partition be
> > > unavailable?
> > >
> > >
> > > On Tue, Oct 15, 2013 at 8:48 AM, Jun Rao <jun...@gmail.com> wrote:
> > >
> > >> If you have a double broker failure with a replication factor of 2, some
> > >> partitions will not be available. When one of the brokers comes back, the
> > >> partition is made available again (with potential data loss), but in an
> > >> under-replicated state. After the second broker comes back, it will catch
> > >> up from the other replica and the partition will eventually be fully
> > >> replicated. There is no need to change the replication factor during this
> > >> process.
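
As a rough way to watch that recovery (again with the modern Java AdminClient,
which postdates this thread; the broker address and topic name are
placeholders), a partition counts as under-replicated while its ISR is smaller
than its assigned replica list:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;
    import org.apache.kafka.common.TopicPartitionInfo;

    public class UnderReplicationCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            try (Admin admin = Admin.create(props)) {
                TopicDescription desc = admin.describeTopics(List.of("example-topic"))
                        .allTopicNames().get().get("example-topic");
                for (TopicPartitionInfo p : desc.partitions()) {
                    // Under-replicated: fewer in-sync replicas than assigned replicas.
                    if (p.isr().size() < p.replicas().size()) {
                        System.out.printf("Partition %d: ISR %d of %d replicas%n",
                                p.partition(), p.isr().size(), p.replicas().size());
                    }
                }
            }
        }
    }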
> > >>
> > >> As for ZK, you can always use the full connection string. ZK will pick
> > >> live servers to establish connections.
> > >>
> > >> Thanks,
> > >>
> > >> Jun
> > >>
> > >>
> > >> On Tue, Oct 15, 2013 at 3:46 AM, Monika Garg <gargmon...@gmail.com> wrote:
> > >>
> > >> > I have a 2-node Kafka cluster with default.replication.factor=2 set in
> > >> > the server.properties file.
> > >> >
> > >> > I removed one node: to remove it, I killed the Kafka process and removed
> > >> > all the kafka-logs and the Kafka bundle from that node.
> > >> >
> > >> > Then I stopped the remaining running node in the cluster and started it
> > >> > again (default.replication.factor is still set to 2 in this node's
> > >> > server.properties file). I was expecting some error/exception, since I
> > >> > no longer have two nodes in my cluster, but I didn't get any
> > >> > error/exception: the node started successfully and I am able to create
> > >> > topics on it.
> > >> >
> > >> > So should "default.replication.factor" be updated from
> > >> > "default.replication.factor=2" to "default.replication.factor=1" on the
> > >> > remaining running node?
> > >> >
> > >> > Similarly, if there are two external ZooKeeper nodes
> > >> > (zookeeper.connect=host1:port1,host2:port1) in my cluster and I have now
> > >> > removed one ZooKeeper node (host1:port1) from the cluster, should the
> > >> > property "zookeeper.connect" be updated from
> > >> > (zookeeper.connect=host1:port1,host2:port1) to
> > >> > (zookeeper.connect=host2:port1)?
> > >> >
> > >> > --
> > >> > *Moniii*
> > >> >
> > >>
> >
>
