Yes Kane, I have the replication factor configured as 3.
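
For reference, a minimal sketch of the two settings being discussed, assuming a 0.8.x-era broker configuration (property names may differ across Kafka versions, so verify against your release's docs). Controlled shutdown is a per-broker setting, while the replication factor is fixed per topic at creation time; `default.replication.factor` only applies to auto-created topics.

```properties
# server.properties (broker) - hedged example, check your Kafka version's docs
# Hand leadership off gracefully before the broker stops:
controlled.shutdown.enable=true
# Replication factor used for auto-created topics:
default.replication.factor=3
```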

On Tue, Jun 24, 2014 at 2:42 AM, Kane Kane <kane.ist...@gmail.com> wrote:

> Hello Neha, can you explain your statements:
> >>Bringing one node down in a cluster will go smoothly only if your
> replication factor is 1 and you enabled controlled shutdown on the brokers.
>
> Can you elaborate on your notion of "smooth"? I thought that with a
> replication factor of 3, you should be able to tolerate the loss
> of a node?
>
> >>Also, bringing down 1 node out of a 3 node zookeeper cluster is risky,
> since any subsequent leader election might not reach a quorum.
>
> So, you mean a ZK cluster of 3 nodes can't tolerate the loss of 1 node?
> I've seen many recommendations to run a 3-node cluster; does that mean
> that in a cluster of 3 you won't be able to operate after losing 1 node?
>
> Thanks.
>
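
As an aside on the quorum question above, ZooKeeper needs a strict majority of the ensemble to elect a leader and acknowledge writes, so the arithmetic can be sketched as follows (function names are illustrative, not from any ZooKeeper API):

```python
# Majority-quorum arithmetic for a ZooKeeper ensemble (illustrative sketch).

def quorum_size(ensemble_size: int) -> int:
    """Smallest strict majority of the ensemble."""
    return ensemble_size // 2 + 1

def tolerated_failures(ensemble_size: int) -> int:
    """How many nodes can be down while a quorum can still form."""
    return ensemble_size - quorum_size(ensemble_size)

# A 3-node ensemble has quorum 2, so it tolerates 1 node down;
# a 5-node ensemble has quorum 3, so it tolerates 2 nodes down.
print(quorum_size(3), tolerated_failures(3))
print(quorum_size(5), tolerated_failures(5))
```

By this arithmetic a 3-node ensemble should keep operating with 1 node down; the risk is that it then has no headroom left, since losing a second node (or a network partition) breaks quorum.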
> On Mon, Jun 23, 2014 at 9:04 AM, Neha Narkhede <neha.narkh...@gmail.com>
> wrote:
> > Bringing one node down in a cluster will go smoothly only if your
> > replication factor is 1 and you enabled controlled shutdown on the
> > brokers.
> > Also, bringing down 1 node out of a 3 node zookeeper cluster is risky,
> > since any subsequent leader election might not reach a quorum. Having
> > said that, a partition going offline shouldn't cause a consumer's offset
> > to reset to an old value. How did you find out what the consumer's
> > offset was? Do you have your consumer's logs around?
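
One way to answer the "how did you find out" question: 0.8-era high-level consumers commit their offsets to ZooKeeper, so the committed value can be inspected directly with `zkCli.sh` (a hedged sketch; `<group>`, `<topic>`, and `<partition>` are placeholders for your own identifiers, and the path applies only to ZooKeeper-based offset storage):

```shell
# Read the committed offset for one consumer group / topic / partition:
bin/zkCli.sh -server localhost:2181 \
  get /consumers/<group>/offsets/<topic>/<partition>
```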
> >
> > Thanks,
> > Neha
> >
> >
> > On Mon, Jun 23, 2014 at 12:28 AM, Hemath Kumar <hksrckmur...@gmail.com>
> > wrote:
> >
> >> We have a 3 node cluster (3 Kafka + 3 ZK nodes). Recently we came
> >> across a strange issue when we wanted to bring one of the nodes down
> >> from the cluster (1 Kafka + 1 ZooKeeper) for maintenance. The moment
> >> we brought it down, on some of the topics (only some partitions) the
> >> consumers' offsets were reset to some old value.
> >>
> >> Any reason why this happened? As far as I know, bringing one node
> >> down should work smoothly without any impact.
> >>
> >> Thanks,
> >> Murthy Chelankuri
> >>
>