Hello Apache Kafka community,

Just noticed the following:

- a message is successfully published using the new 0.8.2.1 producer,
- then Kafka is stopped,
- the next attempt to publish a message using the same producer instance hangs forever.
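The publishing side is roughly the following (simplified sketch, not the actual application code; the broker address, topic name, keys/values and the StringSerializer choice are placeholders):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerHangRepro {

    public static void main(String[] args) throws Exception {
        // Placeholder configuration; the real application uses its own
        // broker list, topic and serializers.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // 1) Broker is up: this publish completes normally.
        producer.send(new ProducerRecord<>("test-topic", "key", "value-1")).get();

        // 2) Kafka is stopped here (externally).

        // 3) Next publish with the same producer instance: this is the call
        //    that never returns for me, while the Selector WARN below is
        //    logged repeatedly.
        producer.send(new ProducerRecord<>("test-topic", "key", "value-2")).get();

        producer.close();
    }
}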
While the second publish attempt hangs, the following stacktrace gets logged repeatedly:

[WARN ] [o.a.kafka.common.network.Selector] [] Error in I/O with localhost/127.0.0.1
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_31]
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) ~[na:1.8.0_31]
        at org.apache.kafka.common.network.Selector.poll(Selector.java:238) ~[kafka-clients-0.8.2.1.jar:na]
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [kafka-clients-0.8.2.1.jar:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [kafka-clients-0.8.2.1.jar:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [kafka-clients-0.8.2.1.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_31]

I expect the producer to respect its timeout settings even in this lost-connection scenario. Is this a known bug? Is there something I can do or configure as a workaround? (The only caller-side mitigation I could come up with so far is sketched in the P.S. below.)

Kind regards,
Stevo Slavic.
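P.S. The caller-side mitigation I have in mind is bounding the wait on the Future returned by send() myself, along these lines (just a sketch; the timeout value and method name are arbitrary placeholders, I would much rather rely on the producer's own timeout configuration, and I am not sure this helps if send() itself ever blocks):

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class BoundedPublish {

    // Waits at most timeoutMs for the broker acknowledgement instead of
    // blocking indefinitely; returns true if the record was acknowledged in time.
    static boolean sendWithDeadline(KafkaProducer<String, String> producer,
                                    ProducerRecord<String, String> record,
                                    long timeoutMs) throws Exception {
        Future<RecordMetadata> future = producer.send(record);
        try {
            future.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            // Broker unreachable or too slow: log/alert and move on
            // rather than hanging the publishing thread forever.
            return false;
        }
    }
}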