Use the zookeeper-shell script:

  ./bin/zookeeper-shell.sh <zkhost>:<zkport>/<namespace> get /brokers/ids/<your broker id>
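For example, checking broker 3 might look roughly like this (the ZK address, broker id and JSON fields here are illustrative only; the exact registration format for your version is described on the "Kafka data structures in Zookeeper" wiki page linked below):

  ./bin/zookeeper-shell.sh 54.241.44.129:2181 get /brokers/ids/3
  {"version":1,"jmx_port":-1,"timestamp":"1391500000000","host":"ip-10-199-31-87.us-west-1.compute.internal","port":9094}

If the "host" field still shows the internal EC2 hostname or the 10.x.x.x address, that is the address remote producers will be told to connect to, which would explain the UnresolvedAddressException/ConnectException in the thread below.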
On Wed, Feb 05, 2014 at 07:04:50AM +0000, Balasubramanian Jayaraman (Contingent) wrote:
> Where should I look for this information? From the logs, I could see
> ZooKeeper is bound to port 2181 and IP 0.0.0.0. The Kafka server is started
> on port 9082 and bound to IP 10.x.x.x.
> If I don't give the host.name in server.properties, I get
> "java.nio.channels.UnresolvedAddressException", and if I give the host.name as
> the local IP "10.x.x.x" I get a "ConnectException".
> It is the same behavior as in 0.8.0.
>
> Thanks
> Bala
>
> -----Original Message-----
> From: Jun Rao [mailto:jun...@gmail.com]
> Sent: Wednesday, February 05, 2014 12:21 AM
> To: users@kafka.apache.org
> Subject: Re: Reg Exception in Kafka
>
> It seems what's registered in ZK (10.199.31.87:9094) is still the local IP, not
> the public one. Could you check the broker registration in ZK
> (https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper)
> and see what the host/port of the broker is?
>
> Thanks,
>
> Jun
>
> On Tue, Feb 4, 2014 at 1:54 AM, Balasubramanian Jayaraman (Contingent) <
> balasubramanian.jayara...@autodesk.com> wrote:
>
> > I downloaded from the trunk and set up the properties
> > host.name=<local IP>
> > advertise.host.name=<public IP>
> >
> > Even after this change, I get the ConnectException. The detailed logs are
> > given below. Is there any workaround for this?
> >
> > [ INFO] [main 2014-02-04 17:46:01,775] Disconnecting from 54.241.44.129:9094
> > [DEBUG] [main 2014-02-04 17:46:01,780] Successfully fetched metadata for 1 topic(s) Set(mytopic)
> > [DEBUG] [main 2014-02-04 17:46:01,798] Getting broker partition info for topic mytopic
> > [DEBUG] [main 2014-02-04 17:46:01,799] Partition [mytopic,0] has leader 3
> > [DEBUG] [main 2014-02-04 17:46:01,807] Broker partitions registered for topic: mytopic are 0
> > [DEBUG] [main 2014-02-04 17:46:01,820] Sending 1 messages with no compression to [mytopic,0]
> > [DEBUG] [main 2014-02-04 17:46:01,833] Producer sending messages with correlation id 2 for topics [mytopic,0] to broker 3 on 10.199.31.87:9094
> > [ERROR] [main 2014-02-04 17:46:22,850] Producer connection to 10.199.31.87:9094 unsuccessful
> > java.net.ConnectException: Connection timed out: connect
> >         at sun.nio.ch.Net.connect0(Native Method)
> >         at sun.nio.ch.Net.connect(Net.java:465)
> >         at sun.nio.ch.Net.connect(Net.java:457)
> >         at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:666)
> >         at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
> >         at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
> >         at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
> >         at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
> >         at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:102)
> >         at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:102)
> >         at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:102)
> >         at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> >         at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:101)
> >         at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:101)
> >         at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:101)
> >         at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> >         at kafka.producer.SyncProducer.send(SyncProducer.scala:100)
> >         at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:254)
> >         at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:106)
> >         at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:100)
> >         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
> >         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
> >         at scala.collection.Iterator$class.foreach(Iterator.scala:631)
> >         at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:161)
> >         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:194)
> >         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> >         at scala.collection.mutable.HashMap.foreach(HashMap.scala:80)
> >         at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
> >         at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
> >         at kafka.producer.Producer.send(Producer.scala:76)
> >         at kafka.javaapi.producer.Producer.send(Producer.scala:33)
> >         at com.autodesk.kafka.test.utils.KafkaProducer.sendMessage(KafkaProducer.java:48)
> >         at com.autodesk.kafka.test.integration.KafkaProducerTest.testProducer(KafkaProducerTest.java:33)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >         at java.lang.reflect.Method.invoke(Method.java:606)
> >         at junit.framework.TestCase.runTest(TestCase.java:154)
> >         at junit.framework.TestCase.runBare(TestCase.java:127)
> >         at junit.framework.TestResult$1.protect(TestResult.java:106)
> >         at junit.framework.TestResult.runProtected(TestResult.java:124)
> >         at junit.framework.TestResult.run(TestResult.java:109)
> >         at junit.framework.TestCase.run(TestCase.java:118)
> >         at junit.framework.TestSuite.runTest(TestSuite.java:208)
> >         at junit.framework.TestSuite.run(TestSuite.java:203)
> >         at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:131)
> >         at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> >         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> >         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> >         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> >         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> > [ WARN] [main 2014-02-04 17:46:22,862] Failed to send producer request with correlation id 2 to broker 3 with data for partitions [mytopic,0]
> >
> > -----Original Message-----
> > From: Jun Rao [mailto:jun...@gmail.com]
> > Sent: Wednesday, January 29, 2014 11:12 PM
> > To: users@kafka.apache.org
> > Subject: Re: Reg Exception in Kafka
> >
> > Hmm, it's weird that EC2 only allows you to bind to the local IP. Could some
> > EC2 users here help out?
> >
> > Also, we recently added https://issues.apache.org/jira/browse/KAFKA-1092,
> > which allows one to use a different IP for binding and connecting. You can
> > see if this works for you. The patch is only in trunk though.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Jan 28, 2014 at 10:10 PM, Balasubramanian Jayaraman (Contingent) <
> > balasubramanian.jayara...@autodesk.com> wrote:
> >
> > > I don't think so. I forgot to include the ifconfig output. Actually
> > > the public IP is not one of the IPs configured on the Ethernet interfaces.
> > > Only the local IP is configured on eth0.
> > > Is there any solution to this?
> > >
> > > ifconfig O/P:
> > >
> > > eth0      Link encap:Ethernet  HWaddr 22:00:0A:C7:1F:57
> > >           inet addr:10.X.X.X  Bcast:10.199.31.127  Mask:255.255.255.192
> > >           inet6 addr: fe80::2000:aff:fec7:1f57/64 Scope:Link
> > >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> > >           RX packets:83186 errors:0 dropped:0 overruns:0 frame:0
> > >           TX packets:91285 errors:0 dropped:0 overruns:0 carrier:0
> > >           collisions:0 txqueuelen:1000
> > >           RX bytes:40233350 (38.3 MiB)  TX bytes:15089154 (14.3 MiB)
> > >           Interrupt:25
> > >
> > > lo        Link encap:Local Loopback
> > >           inet addr:127.0.0.1  Mask:255.0.0.0
> > >           inet6 addr: ::1/128 Scope:Host
> > >           UP LOOPBACK RUNNING  MTU:16436  Metric:1
> > >           RX packets:1379711 errors:0 dropped:0 overruns:0 frame:0
> > >           TX packets:1379711 errors:0 dropped:0 overruns:0 carrier:0
> > >           collisions:0 txqueuelen:0
> > >           RX bytes:109133672 (104.0 MiB)  TX bytes:109133672 (104.0 MiB)
> > >
> > > Thanks
> > > Bala
> > >
> > > -----Original Message-----
> > > From: Jun Rao [mailto:jun...@gmail.com]
> > > Sent: Wednesday, January 29, 2014 12:27 PM
> > > To: users@kafka.apache.org
> > > Subject: Re: Reg Exception in Kafka
> > >
> > > Could it be a port conflict?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Tue, Jan 28, 2014 at 5:20 PM, Balasubramanian Jayaraman (Contingent) <
> > > balasubramanian.jayara...@autodesk.com> wrote:
> > >
> > > > Jun,
> > > >
> > > > Thanks for your help.
> > > > I get the following exception:
> > > > kafka.common.KafkaException: Socket server failed to bind to 54.241.44.129:9092: Cannot assign requested address.
> > > >         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:188)
> > > >         at kafka.network.Acceptor.<init>(SocketServer.scala:134)
> > > >         at kafka.network.SocketServer.startup(SocketServer.scala:61)
> > > >         at kafka.server.KafkaServer.startup(KafkaServer.scala:77)
> > > >         at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
> > > >         at kafka.Kafka$.main(Kafka.scala:46)
> > > >         at kafka.Kafka.main(Kafka.scala)
> > > > Caused by: java.net.BindException: Cannot assign requested address
> > > >         at sun.nio.ch.Net.bind0(Native Method)
> > > >         at sun.nio.ch.Net.bind(Net.java:174)
> > > >         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
> > > >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
> > > >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:70)
> > > >         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:184)
> > > >         ... 6 more
> > > >
> > > > The entire stack trace of the logs is placed below.
> > > >
> > > > [2014-01-29 01:18:23,136] INFO Verifying properties (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,176] INFO Property host.name is overridden to 54.241.44.129 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,177] INFO Property port is overridden to 9092 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,177] INFO Property socket.request.max.bytes is overridden to 104857600 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,177] INFO Property num.io.threads is overridden to 2 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,178] INFO Property log.dirs is overridden to /tmp/kafka-logs-1 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,178] INFO Property log.cleanup.interval.mins is overridden to 1 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,178] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,179] INFO Property log.flush.interval.ms is overridden to 1000 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,179] INFO Property zookeeper.connect is overridden to localhost:2181 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,180] INFO Property broker.id is overridden to 1 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,180] INFO Property log.retention.hours is overridden to 168 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,180] INFO Property num.network.threads is overridden to 2 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,180] INFO Property socket.receive.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,181] INFO Property zookeeper.connection.timeout.ms is overridden to 1000000 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,181] INFO Property num.partitions is overridden to 2 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,181] INFO Property log.flush.interval.messages is overridden to 10000 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,182] INFO Property log.segment.bytes is overridden to 536870912 (kafka.utils.VerifiableProperties)
> > > > [2014-01-29 01:18:23,198] INFO [Kafka Server 1], Starting (kafka.server.KafkaServer)
> > > > [2014-01-29 01:18:23,248] INFO [Log Manager on Broker 1] Starting log cleaner every 60000 ms (kafka.log.LogManager)
> > > > [2014-01-29 01:18:23,260] INFO [Log Manager on Broker 1] Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager)
> > > > [2014-01-29 01:18:23,330] FATAL Fatal error during KafkaServerStable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> > > > kafka.common.KafkaException: Socket server failed to bind to 54.241.44.129:9092: Cannot assign requested address.
> > > >         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:188)
> > > >         at kafka.network.Acceptor.<init>(SocketServer.scala:134)
> > > >         at kafka.network.SocketServer.startup(SocketServer.scala:61)
> > > >         at kafka.server.KafkaServer.startup(KafkaServer.scala:77)
> > > >         at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
> > > >         at kafka.Kafka$.main(Kafka.scala:46)
> > > >         at kafka.Kafka.main(Kafka.scala)
> > > > Caused by: java.net.BindException: Cannot assign requested address
> > > >         at sun.nio.ch.Net.bind0(Native Method)
> > > >         at sun.nio.ch.Net.bind(Net.java:174)
> > > >         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
> > > >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
> > > >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:70)
> > > >         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:184)
> > > >         ... 6 more
> > > > [2014-01-29 01:18:23,333] INFO [Kafka Server 1], Shutting down (kafka.server.KafkaServer)
> > > > [2014-01-29 01:18:23,335] INFO [Socket Server on Broker 1], Shutting down (kafka.network.SocketServer)
> > > > [2014-01-29 01:18:23,339] INFO [Socket Server on Broker 1], Shutdown completed (kafka.network.SocketServer)
> > > > [2014-01-29 01:18:23,341] INFO Shutdown Kafka scheduler (kafka.utils.KafkaScheduler)
> > > > [2014-01-29 01:18:23,360] INFO [Kafka Server 1], Shut down completed (kafka.server.KafkaServer)
> > > > [2014-01-29 01:18:23,384] INFO [Kafka Server 1], Shutting down (kafka.server.KafkaServer)
> > > >
> > > > Regards
> > > > Bala
> > > >
> > > > -----Original Message-----
> > > > From: Jun Rao [mailto:jun...@gmail.com]
> > > > Sent: Tuesday, January 28, 2014 11:30 PM
> > > > To: users@kafka.apache.org
> > > > Subject: Re: Reg Exception in Kafka
> > > >
> > > > You should use the public IP for host.name. What's the error you see during broker startup?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Tue, Jan 28, 2014 at 2:17 AM, Balasubramanian Jayaraman (Contingent) <
> > > > balasubramanian.jayara...@autodesk.com> wrote:
> > > >
> > > > > I checked the FAQ. I did change the host.name in server.properties.
> > > > > After changing it I get a ConnectException.
> > > > >
> > > > > The problem here is that in EC2 we have a different public IP address
> > > > > (55.x.x.x) and the local IP address is (10.x.x.x).
> > > > > I set the host.name property to the local IP address, which is
> > > > > 10.x.x.x. I think because of this there is a ConnectException.
> > > > > When I set the host.name to the public IP address (55.x.x.x), I
> > > > > cannot even start the broker.
> > > > >
> > > > > What should be the IP address that is to be given in the host.name property?
> > > > >
> > > > > Thanks
> > > > > Bala
> > > > >
> > > > > -----Original Message-----
> > > > > From: Jun Rao [mailto:jun...@gmail.com]
> > > > > Sent: Tuesday, January 28, 2014 1:11 AM
> > > > > To: users@kafka.apache.org
> > > > > Subject: Re: Reg Exception in Kafka
> > > > >
> > > > > Have you looked at
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-OnEC2,whycan'tmyhigh-levelconsumersconnecttothebrokers?
> > > > >
> > > > > thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > > On Mon, Jan 27, 2014 at 12:17 AM, Balasubramanian Jayaraman (Contingent) <
> > > > > balasubramanian.jayara...@autodesk.com> wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I have a remote server (EC2) setup with a Kafka cluster. There are
> > > > > > 3 brokers, each running on ports 9092, 9093 and 9094. The ZooKeeper
> > > > > > is running on port 2181.
> > > > > > When I send a message to the brokers from my PC, I get an
> > > > > > exception, which is given below. I did a dump on the remote
> > > > > > server; the request is received by the remote server.
> > > > > > I am able to locally test the consumer/producer scripts present
> > > > > > in the bin folder. What am I missing? Can you kindly help me with this
> > > > > > error? Any help will be highly appreciated.
> > > > > >
> > > > > > [ INFO] [main 2014-01-27 16:06:50,083] Verifying properties
> > > > > > [ INFO] [main 2014-01-27 16:06:50,108] Property metadata.broker.list is overridden to 54.241.44.129:9092,54.241.44.129:9093,54.241.44.129:9094
> > > > > > [ INFO] [main 2014-01-27 16:06:50,108] Property request.required.acks is overridden to 1
> > > > > > [ INFO] [main 2014-01-27 16:06:50,108] Property key.serializer.class is overridden to kafka.serializer.StringEncoder
> > > > > > [ INFO] [main 2014-01-27 16:06:50,108] Property serializer.class is overridden to kafka.utils.EncryptEncoder
> > > > > > [ INFO] [main 2014-01-27 16:06:50,154] send: encrypted - Message_1
> > > > > > [DEBUG] [main 2014-01-27 16:06:50,298] Handling 1 events
> > > > > > [ INFO] [main 2014-01-27 15:59:43,540] Fetching metadata from broker id:0,host:54.241.44.129,port:9093 with correlation id 0 for 1 topic(s) Set(mytopic)
> > > > > > [DEBUG] [main 2014-01-27 15:59:43,737] Created socket with SO_TIMEOUT = 10000 (requested 10000), SO_RCVBUF = 8192 (requested -1), SO_SNDBUF = 102400 (requested 102400).
> > > > > > [ INFO] [main 2014-01-27 15:59:43,738] Connected to 54.241.44.129:9093 for producing
> > > > > > [ INFO] [main 2014-01-27 15:59:44,018] Disconnecting from 54.241.44.129:9093
> > > > > > [DEBUG] [main 2014-01-27 15:59:44,025] Successfully fetched metadata for 1 topic(s) Set(mytopic)
> > > > > > [DEBUG] [main 2014-01-27 15:59:44,058] Getting broker partition info for topic mytopic
> > > > > > [DEBUG] [main 2014-01-27 15:59:44,060] Partition [mytopic,0] has leader 2
> > > > > > [DEBUG] [main 2014-01-27 15:59:44,072] Broker partitions registered for topic: mytopic are 0
> > > > > > [DEBUG] [main 2014-01-27 15:59:44,091] Sending 1 messages with no compression to [mytopic,0]
> > > > > > [DEBUG] [main 2014-01-27 15:59:44,109] Producer sending messages with correlation id 2 for topics [mytopic,0] to broker 2 on ip-10-199-31-87.us-west-1.compute.internal:9093
> > > > > > [ERROR] [main 2014-01-27 15:59:44,129] Producer connection to ip-10-199-31-87.us-west-1.compute.internal:9093 unsuccessful
> > > > > > java.nio.channels.UnresolvedAddressException
> > > > > >         at sun.nio.ch.Net.checkAddress(Net.java:127)
> > > > > >         at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:640)
> > > > > >         at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
> > > > > >         at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
> > > > > >         at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
> > > > > >         at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
> > > > > >         at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:102)
> > > > > >         at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:102)
> > > > > >         at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:102)
> > > > > >         at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> > > > > >         at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:101)
> > > > > >         at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:101)
> > > > > >         at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:101)
> > > > > >         at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> > > > > >         at kafka.producer.SyncProducer.send(SyncProducer.scala:100)
> > > > > >         at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:254)
> > > > > >         at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:106)
> > > > > >         at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:100)
> > > > > >         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
> > > > > >         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
> > > > > >         at scala.collection.Iterator$class.foreach(Iterator.scala:631)
> > > > > >         at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:161)
> > > > > >         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:194)
> > > > > >         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> > > > > >         at scala.collection.mutable.HashMap.foreach(HashMap.scala:80)
> > > > > >         at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
> > > > > >         at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
> > > > > >         at kafka.producer.Producer.send(Producer.scala:76)
> > > > > >         at kafka.javaapi.producer.Producer.send(Producer.scala:33)
> > > > > >         at kafka.application.KafkaProducer.sendMessage(KafkaProducer.java:39)
> > > > > >         at kafka.test.KafkaProducerTest.main(KafkaProducerTest.java:21)
> > > > > > [ WARN] [main 2014-01-27 15:59:44,139] Failed to send producer request with correlation id 2 to broker 2 with data for partitions [mytopic,0]
> > > > > >
> > > > > > Thanks
> > > > > > Bala
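
For what it's worth, below is a minimal server.properties sketch for the EC2 setup discussed above. The values are illustrative only, and note that on trunk the properties added by KAFKA-1092 are, as far as I can tell, spelled advertised.host.name / advertised.port (not advertise.host.name as quoted above), so it is worth double-checking the spelling against config/server.properties in your checkout:

  # Illustrative broker settings for an EC2 host whose eth0 only has the private IP
  broker.id=3
  port=9094
  # Bind to the private interface; the public IP is not assigned to eth0,
  # so binding to it directly fails with the BindException shown above.
  host.name=10.199.31.87
  # Advertise the public address; assuming KAFKA-1092 is in your build, this is
  # what gets registered in ZooKeeper and returned to clients in topic metadata,
  # so external producers and consumers get a reachable address back.
  advertised.host.name=54.241.44.129
  advertised.port=9094
  zookeeper.connect=localhost:2181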