Hi, All:

While trying to find the bug described in my previous email, I tried to produce a message to the Kafka broker server on the AWS instance with the command line below:

$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic temp1
[2015-08-17 21:42:59,468] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
hi 1
The error shown:

[2015-08-17 21:43:06,610] WARN Error while fetching metadata [{TopicMetadata for topic temp1 -> No partition metadata for topic temp1 due to kafka.common.LeaderNotAvailableException}] for topic [temp1]: class kafka.common.LeaderNotAvailableException (kafka.producer.BrokerPartitionInfo)
[2015-08-17 21:43:06,615] WARN Error while fetching metadata [{TopicMetadata for topic temp1 -> No partition metadata for topic temp1 due to kafka.common.LeaderNotAvailableException}] for topic [temp1]: class kafka.common.LeaderNotAvailableException (kafka.producer.BrokerPartitionInfo)
[2015-08-17 21:43:06,615] ERROR Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: temp1 (kafka.producer.async.DefaultEventHandler)
.......
[2015-08-17 21:43:07,039] ERROR Error in handling batch of 1 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
        at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
        at scala.collection.immutable.Stream.foreach(Stream.scala:547)
        at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
        at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)

Can anyone give me some suggestions on this?

Sincerely,
Selina
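As a starting point for debugging the LeaderNotAvailableException above, here is a minimal Java sketch (not from the original thread) that asks the broker directly for the topic's metadata using the 0.8 SimpleConsumer API; the broker host name, port, client id, and topic are illustrative assumptions taken from the configuration quoted below:

    import java.util.Collections;

    import kafka.javaapi.PartitionMetadata;
    import kafka.javaapi.TopicMetadata;
    import kafka.javaapi.TopicMetadataRequest;
    import kafka.javaapi.TopicMetadataResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class TopicMetadataCheck {
        public static void main(String[] args) {
            // Assumed broker address: the advertised host name from server.properties.
            SimpleConsumer consumer = new SimpleConsumer(
                    "ec2-51-16-17-181.us-west-1.compute.amazonaws.com",
                    9092, 100000, 64 * 1024, "metadata-check");
            try {
                TopicMetadataRequest request =
                        new TopicMetadataRequest(Collections.singletonList("temp1"));
                TopicMetadataResponse response = consumer.send(request);
                for (TopicMetadata topic : response.topicsMetadata()) {
                    System.out.println("topic=" + topic.topic()
                            + " errorCode=" + topic.errorCode());
                    for (PartitionMetadata partition : topic.partitionsMetadata()) {
                        // leader() is null while no leader has been elected, which is
                        // the condition LeaderNotAvailableException reports.
                        System.out.println("  partition=" + partition.partitionId()
                                + " leader=" + partition.leader()
                                + " errorCode=" + partition.errorCode());
                    }
                }
            } finally {
                consumer.close();
            }
        }
    }

If the response shows no partitions for temp1, or a partition with a null leader, the broker has not yet created or elected a leader for the topic, which matches the warnings in the console-producer output above.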
On Mon, Aug 17, 2015 at 2:25 PM, Job-Selina Wu <swucaree...@gmail.com> wrote:

> Dear All:
>
> I am looking for Kafka experts to help me with a remote Kafka Java
> producer configuration.
>
> My Kafka broker and my Java producer are on different AWS instances.
> How should I set the "metadata.broker.list" value? According to
> https://kafka.apache.org/08/configuration.html, the format of
> "metadata.broker.list" is host1:port1,host2:port2, and the list can be a
> subset of brokers or a VIP pointing to a subset of brokers.
> I am wondering what "a VIP pointing to a subset of brokers" means, and
> what the correct value of metadata.broker.list is.
>
> My Kafka broker server's public IP address is 52.16.17.181
> My Kafka broker server's public DNS is
> ec2-51-16-17-181.us-west-1.compute.amazonaws.com
>
> Is my producer configuration below right? Am I missing anything?
>
> // I think the value of metadata.broker.list is not right, but I do not
> // know what the right value is
> props.put("metadata.broker.list", "52.16.17.181:9092");
> props.put("serializer.class", "kafka.serializer.StringEncoder");
> props.put("request.required.acks", "0");
>
> My Kafka broker server configuration and the error on the Kafka producer
> Java client side are listed below.
>
> This bug has been blocking me for a few days. Your help is highly
> appreciated.
>
> Sincerely,
> Selina
>
> -------- The config/server.properties at the Kafka broker server on AWS --------
>
> zookeeper.connect=localhost:2181
> zookeeper.connection.timeout.ms=6000
>
> delete.topic.enable=true
>
> broker.id=0
> port=9092
> host.name=localhost
>
> advertised.host.name=ec2-51-16-17-181.us-west-1.compute.amazonaws.com
>
> # below is the same as the default
> #advertised.port=<port accessible by clients>
> num.network.threads=3
> num.io.threads=8
> socket.send.buffer.bytes=102400
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> log.dirs=/tmp/kafka-logs
> num.partitions=1
> num.recovery.threads.per.data.dir=1
> #log.flush.interval.messages=10000
> #log.flush.interval.ms=1000
> log.retention.hours=168
> #log.retention.bytes=1073741824
> log.segment.bytes=1073741824
> log.retention.check.interval.ms=300000
> log.cleaner.enable=false
>
> ------- Error on the Kafka producer Java client side -------
>
> kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
>         at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
>         at kafka.producer.Producer.send(Producer.scala:77)
>         at kafka.javaapi.producer.Producer.send(Producer.scala:33)
>         at com.cinarra.kafka.Main.main(Main.java:21)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
>         at java.lang.Thread.run(Thread.java:745)
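For reference, here is a minimal sketch (not part of the original emails) of a 0.8-style Java producer wired up with the settings discussed above; the topic name "temp1" and the choice to put the broker's public DNS in metadata.broker.list are assumptions based on the quoted server.properties:

    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class SimpleStringProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Assumption: use the broker's advertised host name (its public DNS)
            // so the bootstrap address matches the address the broker advertises
            // back in its metadata.
            props.put("metadata.broker.list",
                    "ec2-51-16-17-181.us-west-1.compute.amazonaws.com:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("request.required.acks", "0"); // as in the quoted configuration

            Producer<String, String> producer =
                    new Producer<String, String>(new ProducerConfig(props));
            try {
                // "temp1" is the topic used in the console-producer test above.
                producer.send(new KeyedMessage<String, String>("temp1", "hi 1"));
            } finally {
                producer.close();
            }
        }
    }

With the 0.8 producer, metadata.broker.list is only used for the initial bootstrap; after that the client connects to whatever address the broker advertises (advertised.host.name and advertised.port), so the client machine must be able to resolve ec2-51-16-17-181.us-west-1.compute.amazonaws.com and reach it on port 9092.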