Hi,

I have created a Zookeeper container and three Kafka brokers as Docker containers on a physical host, as shown below.
[image: image.png]

The following commands were used to create the Zookeeper and Kafka containers:

docker run -d --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 jplock/zookeeper

docker run -d --name kafka_broker0 -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=50.140.197.220 -e ZOOKEEPER_IP=50.140.197.220 -e KAFKA_BROKER_ID=0 -e KAFKA_BROKER_PORT=9092 ches/kafka

docker run -d --name kafka_broker1 -p 9093:9092 -e KAFKA_ADVERTISED_HOST_NAME=50.140.197.220 -e ZOOKEEPER_IP=50.140.197.220 -e KAFKA_BROKER_ID=1 -e KAFKA_BROKER_PORT=9092 ches/kafka

docker run -d --name kafka_broker2 -p 9094:9092 -e KAFKA_ADVERTISED_HOST_NAME=50.140.197.220 -e ZOOKEEPER_IP=50.140.197.220 -e KAFKA_BROKER_ID=2 -e KAFKA_BROKER_PORT=9092 ches/kafka

Note the mapping of each container's broker port to a port on the physical host (9092, 9093 and 9094 respectively).

I have created the following topic, which works:

${KAFKA_HOME}/bin/kafka-topics.sh --create --zookeeper rhes75:2181 --replication-factor 1 --partitions 1 --topic three

${KAFKA_HOME}/bin/kafka-topics.sh --describe --zookeeper rhes75:2181 --topic three
Topic:three     PartitionCount:1        ReplicationFactor:1     Configs:
        Topic: three    Partition: 0    Leader: 0       Replicas: 0     Isr: 0

*So there is only one partition and a replication factor of one.*

The following producer works fine:

cat ${IN_FILE} | ${KAFKA_HOME}/bin/kafka-console-producer.sh --broker-list rhes75:9092,rhes75:9093,rhes75:9094 --topic three

However, when I define a topic with --replication-factor 2 --partitions 2 as follows:

hduser@rhes564: /data6/hduser/prices/avg_prices> ${KAFKA_HOME}/bin/kafka-topics.sh --create --zookeeper rhes75:2181 --replication-factor 2 --partitions 2 --topic newone
Created topic "newone".

hduser@rhes564: /data6/hduser/prices/avg_prices> ${KAFKA_HOME}/bin/kafka-topics.sh --describe --zookeeper rhes75:2181 --topic newone
Topic:newone    PartitionCount:2        ReplicationFactor:2     Configs:
        Topic: newone   Partition: 0    Leader: 2       Replicas: 2,0   Isr: 2,0
        Topic: newone   Partition: 1    Leader: 0       Replicas: 0,1   Isr: 0

it throws errors:

[2018-07-16 15:51:40,852] WARN [Producer clientId=console-producer] Got error produce response with correlation id 12 on topic-partition newone-0, retrying (1 attempts left). Error: NOT_LEADER_FOR_PARTITION (org.apache.kafka.clients.producer.internals.Sender)
[2018-07-16 15:51:40,955] WARN [Producer clientId=console-producer] Got error produce response with correlation id 14 on topic-partition newone-0, retrying (0 attempts left). Error: NOT_LEADER_FOR_PARTITION (org.apache.kafka.clients.producer.internals.Sender)
[2018-07-16 15:51:41,056] ERROR Error when sending message to topic newone with key: null, value: 67 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
[2018-07-16 15:51:41,059] ERROR Error when sending message to topic newone with key: null, value: 67 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
[2018-07-16 15:51:41,059] ERROR Error when sending message to topic newone with key: null, value: 68 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
[2018-07-16 15:51:41,060] ERROR Error when sending message to topic newone with key: null, value: 67 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
[2018-07-16 15:51:41,060] ERROR Error when sending message to topic newone with key: null, value: 67 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.

I believe these Kafka brokers have a problem talking to each other, and the messages are lost!

Thanks

Dr Mich Talebzadeh

LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
http://talebzadehmich.wordpress.com
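P.S. In case it helps anyone reproducing this, a rough sketch of checks one could run next (assuming the container names, host ports and topic above; the grep pattern is only illustrative, not the exact log text):

# Confirm each container's internal broker port 9092 is published on the expected host port
docker port kafka_broker0 9092
docker port kafka_broker1 9092
docker port kafka_broker2 9092

# Look for leader/replication messages in a broker's log
docker logs kafka_broker1 2>&1 | grep -i "leader\|replica"

# Ask Kafka which partitions of newone are currently under-replicated
${KAFKA_HOME}/bin/kafka-topics.sh --describe --zookeeper rhes75:2181 --topic newone --under-replicated-partitions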