Hi Robert,

I tried the *kill -SIGTERM xxxx* command to stop a Kafka broker, but it shut
down all the other brokers as well.

Can you please suggest how I can avoid stopping the other brokers?

Are there any configuration changes required?

Regards, Rafeeq S


On Tue, Jun 3, 2014 at 9:40 AM, Robert Hodges <berkeleybob2...@gmail.com>
wrote:

> Hi Rafeeq,
>
> You can stop them individually by killing the processes.  The
> kafka-server-stop.sh command just uses a kill -SIGTERM if you look at the
> end of the script:
>
> ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM
>
> So 'kill -SIGTERM 3026' would kill a broker process with PID 3026.
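>
> As a minimal sketch (not part of the script itself): if each broker on the
> host is started with its own properties file, that file name appears on the
> Java command line, so you can narrow the match to a single broker. The name
> server-1.properties below is only an assumed example:
>
> # stop only the broker started with server-1.properties (file name assumed)
> ps ax | grep -i 'kafka\.Kafka' | grep java | grep 'server-1.properties' \
>   | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM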
>
> Cheers, Robert
>
>
> On Mon, Jun 2, 2014 at 11:04 PM, rafeeq s <rafeeq.ec...@gmail.com> wrote:
>
> > Thanks Robert,
> >
> > For #2, yes Robert, I am using the kafka-server-stop.sh script to stop the
> > brokers, and they all reside on the same host with different ports.
> >
> > Is there any way to prevent shutting down all the brokers?
> >
> > Thanks for your kind response!
> >
> >
> >
> >
> > On Mon, Jun 2, 2014 at 10:37 PM, Robert Hodges <berkeleybob2...@gmail.com>
> > wrote:
> >
> > > Hi Rafeeq,
> > >
> > > With respect to question #2, are you stopping brokers using
> > > kafka-server-stop.sh and are they all on a single host?  If so, the
> > > script finds anything that looks like a Kafka server and should knock
> > > out all the brokers at once.  If your cluster runs across multiple
> > > hosts something else is going on.
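> > >
> > > A rough sketch of an alternative, assuming each broker on the host
> > > listens on its own port (9092 below is only an illustration): look up
> > > the PID listening on that port and send SIGTERM to it alone.
> > >
> > > # find the PID listening on TCP port 9092 (port assumed) and stop it
> > > lsof -iTCP:9092 -sTCP:LISTEN -t | xargs kill -SIGTERM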
> > >
> > > Cheers, Robert
> > >
> > > p.s., Same thing applies for the zookeeper-server-stop.sh script.
> > >
> > >
> > > On Mon, Jun 2, 2014 at 10:23 AM, rafeeq s <rafeeq.ec...@gmail.com>
> > > wrote:
> > >
> > > > Hi
> > > >
> > > > I am using Kafka version 0.8.1 and facing a frequent issue when a *kafka
> > > > broker* starts up or restarts, such as:
> > > >
> > > > *1. Whenever a Kafka broker is restarted, it gets shut down and throws the
> > > > following error on all broker nodes.*
> > > >
> > > > java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
> > > >         at kafka.utils.Utils$.read(Utils.scala:376)
> > > >         at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:67)
> > > >         at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
> > > >         at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
> > > >         at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
> > > >         at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:81)
> > > >         at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:71)
> > > >         at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:109)
> > > >         at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
> > > >         at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
> > > >         at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> > > >         at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:108)
> > > >         at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
> > > >         at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
> > > >         at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> > > >         at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:107)
> > > >         at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96)
> > > >         at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88)
> > > >         at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
> > > >
> > > > *Or sometimes it throws the error below:*
> > > > ERROR [KafkaApi-151] Error while fetching metadata for partition [BACKUP,1] (kafka.server.KafkaApis)
> > > > kafka.common.ReplicaNotAvailableException
> > > >         at kafka.server.KafkaApis$$anonfun$20$$anonfun$23.apply(KafkaApis.scala:589)
> > > >         at kafka.server.KafkaApis$$anonfun$20$$anonfun$23.apply(KafkaApis.scala:574)
> > > >         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> > > >         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> > > >
> > > >
> > > > *2. If I stop a single Kafka broker, why does it shut down all the other
> > > > brokers in the cluster?*
> > > >
> > > > When I try to stop a single Kafka broker, it terminates all the other
> > > > Kafka brokers.
> > > >
> > > > Any guess why all the Kafka broker nodes get terminated when a single
> > > > broker is stopped?
> > > >
> > > > Thanks in advance; your answer will save me a lot of time.
> > > >
> > >
> >
>
