That's not how Kafka works. You need to pass the full list of brokers.

On Tuesday, March 11, 2014, A A <andthereitg...@hotmail.com> wrote:
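For example, with a two-broker setup like yours, the producer invocation would look something like this (the hostnames broker1/broker2 are placeholders; substitute your actual hosts and ports):

```
# Pass every broker in the cluster so the producer can still fetch
# topic metadata when one broker is down or is not the partition leader:
$KAFKA_HOME/bin/kafka-console-producer.sh \
  --broker-list broker1:9092,broker2:9092 \
  --topic test1
```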
> Hi again.
>
> Got the setup working. I now have 2 brokers (broker 1 and broker 2) with
> one remote ZK. I was also able to create some topics:
>
> $KAFKA_HOME/bin/kafka-list-topic.sh --zookeeper 192.168.1.120:2181
> topic: test   partition: 0  leader: 1  replicas: 1  isr: 1
> topic: test1  partition: 0  leader: 1  replicas: 1  isr: 1
>
> I then try to publish a message from broker 1 to broker 1 (localhost) while
> I am consuming on broker 2:
>
> $KAFKA_HOME/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test1
> helloooo
> [2014-03-11 07:36:35,206] WARN Error while fetching metadata [{TopicMetadata for topic test1 ->
> No partition metadata for topic test1 due to kafka.common.LeaderNotAvailableException}] for topic [test1]:
> class kafka.common.LeaderNotAvailableException (kafka.producer.BrokerPartitionInfo)
> [2014-03-11 07:36:35,244] WARN Error while fetching metadata [... same LeaderNotAvailableException
> for test1 ...] (kafka.producer.BrokerPartitionInfo)
> [2014-03-11 07:36:35,248] ERROR Failed to collate messages by topic, partition due to:
> Failed to fetch topic metadata for topic: test1 (kafka.producer.async.DefaultEventHandler)
> [... the same WARN/WARN/ERROR cycle repeats at 07:36:35,358-368, 35,476-490, 35,599-609, and 35,721 ...]
> [2014-03-11 07:36:35,723] ERROR Failed to send requests for topics test1 with correlation ids
> in [0,8] (kafka.producer.async.DefaultEventHandler)
> [2014-03-11 07:36:35,725] ERROR Error in handling batch of 1 events (kafka.producer.async.ProducerSendThread)
> kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
>         at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
>         at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
>         at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
>         at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
>         at scala.collection.immutable.Stream.foreach(Stream.scala:254)
>         at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
>         at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
>
> What am I missing?
>
> > From: andthereitg...@hotmail.com
> > To: users@kafka.apache.org
> > Subject: RE: Remote Zookeeper
> > Date: Tue, 11 Mar 2014 02:29:34 +0000
> >
> > Thanks. Totally missed that.
> >
> > > From: b...@b3k.us
> > > Date: Mon, 10 Mar 2014 19:18:50 -0700
> > > Subject: Re: Remote Zookeeper
> > > To: users@kafka.apache.org
> > >
> > > zookeeper.connect
> > >
> > > https://kafka.apache.org/08/configuration.html
> > >
> > > On Mon, Mar 10, 2014 at 7:17 PM, A A <andthereitg...@hotmail.com> wrote:
> > >
> > > > Hi
> > > >
> > > > Pretty new to Kafka. Have been successful in installing Kafka 0.8.0.
> > > > I am just wondering how should I make my Kafka cluster (2 brokers) connect
> > > > to a single remote ZooKeeper server?
> > > > I am using $KAFKA/kafka-server-start.sh $KAFKA_CONFIG/server.properties
> > > > on both the brokers to start them up,
> > > > and I have a remote ZooKeeper running at 192.168.1.120.
> > > >
> > > > Can anyone help?
> > > >
> > > > A
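For completeness, the broker-side configuration the earlier replies point at amounts to something like the sketch below. The broker.id values and port are illustrative; 192.168.1.120:2181 is the remote ZooKeeper from this thread:

```
# config/server.properties on broker 1 (use a distinct broker.id, e.g. 2, on broker 2)
broker.id=1
port=9092
# point both brokers at the single remote ZooKeeper instance
zookeeper.connect=192.168.1.120:2181
```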