Thanks Ewen, but I wonder why it has not been fixed yet; it seems like a very easy fix -
make the metadata fetch part of the code executed in a Future and give it a timeout.
Basically there is more than one issue here, and both are very critical.
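Something along these lines is what I have in mind as a client-side workaround - just a rough sketch, where the broker address, topic name and the 5 second bound are arbitrary placeholders:

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BoundedPoll {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // broker may not be running
        props.put("group.id", "test");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));

        ExecutorService executor = Executors.newSingleThreadExecutor();
        Callable<ConsumerRecords<String, String>> boundedPoll = () -> consumer.poll(100);
        Future<ConsumerRecords<String, String>> result = executor.submit(boundedPoll);
        try {
            // Bound the call ourselves, since poll() itself may block on metadata.
            ConsumerRecords<String, String> records = result.get(5, TimeUnit.SECONDS);
            System.out.println("Received " + records.count() + " records");
        } catch (TimeoutException e) {
            // Broker unreachable (or the metadata fetch is stuck) - fail fast here.
            // wakeup() is the one thread-safe KafkaConsumer method and should break
            // the blocked poll() on the polling thread.
            consumer.wakeup();
        } finally {
            executor.shutdown();
        }
    }
}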
Sent from my iPhone
> On Aug 3, 2016, at 01:04, Ewen Cheslack-Postava wrote:
Hi Ewen,
The producer doesn't have the same issue, right? It will eventually throw a
TimeoutException:
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L515
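Something like this should demonstrate it (broker address, topic name and the 3 second max.block.ms are just placeholders; depending on the client version the TimeoutException may be thrown from send() itself or reported through the returned future):

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.TimeoutException;

public class ProducerMetadataTimeout {

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // nothing listening here
        props.put("max.block.ms", "3000");                // bound the metadata wait
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() blocks on the initial metadata fetch, but only up to max.block.ms.
            Future<RecordMetadata> result =
                    producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            result.get(); // surfaces the error if it was reported via the future
        } catch (TimeoutException e) {
            // Metadata was not available within max.block.ms.
            System.err.println("Timed out waiting for metadata: " + e.getMessage());
        } catch (ExecutionException e) {
            // The TimeoutException may arrive here instead, as the cause.
            System.err.println("Send failed: " + e.getCause());
        }
    }
}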
Ismael
On Wed, Aug 3, 2016 at 6:04 AM, Ewen Cheslack-Postava
wrote:
This is unfortunate, but a known issue; see
https://issues.apache.org/jira/browse/KAFKA-1894. The producer suffers from
a similar issue with its initial metadata fetch on the first send().
-Ewen
On Thu, Jul 28, 2016 at 12:46 PM, Oleg Zhurakousky <
ozhurakou...@hortonworks.com> wrote:
Also, the Javadoc for KafkaConsumer#poll(timeout) states:
@param timeout The time, in milliseconds, spent waiting in poll if data is not
available. If 0, returns immediately with any records that are available now.
Must not be negative.
Yet even setting it to 0 makes no difference.
So I have a KafkaConsumer that is deliberately configured with server properties
pointing to a non-running broker.
Calling KafkaConsumer.poll(100) blocks indefinitely even though ‘fetch.max.wait.ms’
is set to 1 millisecond.
Basically I am trying to fail fast when a connection is not possible.
Any idea how to accomplish this?
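For reference, this is roughly what I am running (broker address, topic name and timeouts are placeholders):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollBlocksRepro {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // deliberately not running
        props.put("group.id", "test");
        props.put("fetch.max.wait.ms", "1");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            // Blocks indefinitely here: the poll timeout and fetch.max.wait.ms only
            // bound the fetch once the broker is reachable; the initial metadata /
            // coordinator lookup inside poll() has no bound (KAFKA-1894).
            ConsumerRecords<String, String> records = consumer.poll(100);
            System.out.println("Received " + records.count() + " records");
        }
    }
}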