There are no errors in the broker logs.
The Kafka cluster itself is functional: I have other producers and
consumers working that are in the public subnet (same as the Kafka cluster).
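
For context, a sketch of the client behaviour rather than anything specific to
this setup: the console producer only uses --broker-list for the initial
metadata request; after that it sends to whatever host each broker has
registered in ZooKeeper, i.e. advertised.host.name. One way to see what is
actually being advertised, assuming broker.id=0 as in the config quoted below,
is to read the registration from ZooKeeper:

[ec2-user@ip-x-x-x-x kafka_2.10-0.9.0.1]$ bin/zookeeper-shell.sh <zk_private_ip>:2181
get /brokers/ids/0

If the "host" and "endpoints" fields in the returned JSON show the public IP,
produce requests from the private subnet go to that public IP even though the
bootstrap connection to the private IP succeeds.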



On Wed, Jun 29, 2016 at 7:15 PM, Kamesh Kompella <kam...@chooxy.com> wrote:

> For what it's worth, I used to get similar messages with Docker instances
> on CentOS.
>
> The way I debugged the problem was by looking at the Kafka logs. In that
> case, it turned out that the brokers could not reach ZK, and this info was
> in the logs. The logs list the parameters the broker used at startup and
> any errors.
>
> In my case, the problem was the firewall that blocked access to zk from
> Kafka.
>
> > On Jun 29, 2016, at 6:56 PM, vivek thakre <vivek.tha...@gmail.com> wrote:
> >
> > I have a Kafka cluster set up in an AWS public subnet, with all brokers
> > having elastic IPs.
> > My producers are in a private subnet and are not able to produce to the
> > Kafka cluster in the public subnet.
> > Both subnets are in the same VPC.
> >
> > I added the private IP/CIDR of the producer EC2 instance to the public
> > Kafka cluster's security group.
> > (I can telnet from the private EC2 instance to the brokers' private IPs
> > on port 9092.)
> >
> > From the EC2 instance in the private subnet, I can list the topics using
> > ZK's private IP:
> >
> > [ec2-user@ip-x-x-x-x kafka_2.10-0.9.0.1]$ bin/kafka-topics.sh --zookeeper <zk_private_ip>:2181 --list
> > test
> >
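> > (Listing topics only talks to ZooKeeper, so it does not exercise the
> > broker connections themselves. A closer sketch of the path the producer
> > takes, reusing the placeholders above, is to fetch offsets through the
> > brokers:
> >
> > [ec2-user@ip-x-x-x-x kafka_2.10-0.9.0.1]$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker_private_ip>:9092 --topic test --time -1
> >
> > This gets metadata from the broker list and then connects to each
> > partition leader at its advertised address, so it fails the same way the
> > producer does if that address is unreachable.)
> >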
> > When I try to produce from the private EC2 instance using a broker's
> > private IP, I get the following error:
> >
> > [ec2-user@ip-x-x-x-x kafka_2.10-0.9.0.1]$ bin/kafka-console-producer.sh --broker-list <broker_private_ip>:9092 --topic test
> >
> > [2016-06-29 18:47:38,328] ERROR Error when sending message to topic test with key: null, value: 3 bytes with error: Batch Expired (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> >
> > When I try to produce from the private EC2 instance using a broker's
> > public IP, I get the following error:
> >
> > [2016-06-29 18:53:15,918] ERROR Error when sending message to topic test with key: null, value: 3 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> >
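> > ("Batch Expired" above suggests the bootstrap metadata request to the
> > private IP succeeded but the subsequent sends to the advertised broker
> > address timed out; "Failed to update metadata" here means not even the
> > bootstrap connection to the public IP succeeded. A quick reachability
> > check from the same instance, mirroring the telnet test above, would be:
> >
> > telnet <broker_public_ip> 9092
> >
> > From the private subnet this only works if there is a route to the
> > elastic IP, e.g. via a NAT gateway, and the security group allows it.
> > <broker_public_ip> is a placeholder, not from the original commands.)
> >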
> > A few settings from server.properties:
> > # The id of the broker. This must be set to a unique integer for each broker.
> > broker.id=0
> >
> > ############################# Socket Server Settings #############################
> >
> > listeners=PLAINTEXT://:9092
> >
> > # The port the socket server listens on
> > #port=9092
> >
> > # Hostname the broker will bind to. If not set, the server will bind to all interfaces
> > host.name=<Public IP>
> >
> > # Hostname the broker will advertise to producers and consumers. If not set,
> > # it uses the value for "host.name" if configured. Otherwise, it will use the
> > # value returned from java.net.InetAddress.getCanonicalHostName().
> > advertised.host.name=<Public IP>
> >
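> > (For comparison, a minimal sketch of these settings, assuming clients
> > inside the VPC should reach the broker over its private address; the
> > <broker_private_ip> placeholder is not from the original config:
> >
> > # bind on all interfaces
> > listeners=PLAINTEXT://0.0.0.0:9092
> > # advertise an address that producers in the private subnet can reach
> > advertised.host.name=<broker_private_ip>
> > advertised.port=9092
> >
> > A 0.9.0.1 broker advertises only one address, so clients in the public
> > subnet would also have to use the private IP, which they can, since both
> > subnets are in the same VPC.)
> >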
> > Please let me know if I am doing something wrong.
> >
> > Thank you
> >
> > Vivek
>
