Yes, this is something that we could consider fixing in Kafka itself.
Pretty much all timeouts can be customized if the defaults for the
OS/network are larger than makes sense for the system. And given the large
default values for some of these timeouts, we probably don't want to rely
on the default...
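To make that concrete, here is a rough sketch of the kind of producer overrides being discussed. The broker addresses and values are placeholders, and socket.connection.setup.timeout.ms is set as a plain string because it only exists in newer clients (KIP-601); older producers cannot bound the TCP connect phase this way.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class TightTimeoutProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Cap how long send() may block waiting for metadata (default is 60s).
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10_000);
        // Per-request timeout once a connection exists (default is 30s).
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 5_000);
        // Refresh metadata more aggressively so low-volume producers don't go stale
        // (default is 5 minutes).
        props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, 60_000);
        // Only available in newer clients (KIP-601): bound the TCP connect itself
        // instead of relying on the OS-level connect timeout.
        props.put("socket.connection.setup.timeout.ms", 5_000);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // ... use the producer ...
        }
    }
}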
Makes sense, thanks Ewen.

Is this something we could consider fixing in Kafka itself? I don't think
the producer is necessarily doing anything wrong, but the end result is
certainly very surprising behavior. It would also be nice not to have to
coordinate request timeouts, retries, and the max block time...
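For anyone following along, here is a sketch of the coordination being referred to. The broker address, topic, and values are placeholders, and the worst-case arithmetic in the comments assumes the connection behavior described elsewhere in this thread.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.serialization.StringSerializer;

public class CoordinatedTimeouts {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // These three knobs all have to be reasoned about together:
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 5_000);   // per in-flight request
        props.put(ProducerConfig.RETRIES_CONFIG, 3);                  // retries after a failed request
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10_000);        // cap on blocking inside send()

        // Worst case for the synchronous part of send(): up to max.block.ms (10s here)
        // waiting for metadata or buffer space. request.timeout.ms does not apply yet,
        // because no request can be sent until a connection exists.
        // Worst case for the asynchronous part: roughly
        //   (retries + 1) * (request.timeout.ms + retry.backoff.ms)
        // per batch, unless the client is new enough to have delivery.timeout.ms,
        // which puts a single upper bound on the whole retry sequence.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            System.err.println("async send failed: " + exception);
                        }
                    });
        } catch (TimeoutException e) {
            // Thrown by send() when max.block.ms is exceeded while fetching metadata.
            System.err.println("send() blocked for max.block.ms and gave up: " + e);
        }
    }
}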
Without having dug back into the code to check, this sounds right.
Connection management just fires off a request to connect and then
subsequent poll() calls will handle any successful/failed connections. The
timeouts wrt requests are handled somewhat differently (the connection
request isn't explicitly...
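As an illustration of that pattern in plain Java NIO (not the actual NetworkClient code), a fire-and-forget connect looks roughly like this; the non-routable address is just a stand-in for a broker that never answers.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class FireAndForgetConnect {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        // "Fire off" the connection attempt; this returns immediately.
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);
        channel.connect(new InetSocketAddress("10.255.255.1", 9092)); // non-routable placeholder
        channel.register(selector, SelectionKey.OP_CONNECT);

        long started = System.currentTimeMillis();

        // Subsequent poll()/select() passes are the only place success or failure
        // is noticed. Nothing here fails the connect after a request timeout;
        // the OS connect timeout (often minutes) applies instead, unless the
        // caller explicitly tracks how long the connect has been pending.
        while (true) {
            selector.select(1_000);
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isConnectable()) {
                    try {
                        if (((SocketChannel) key.channel()).finishConnect()) {
                            System.out.println("connected");
                            return;
                        }
                    } catch (IOException e) {
                        System.out.println("connect failed: " + e);
                        return;
                    }
                }
            }
            selector.selectedKeys().clear();
            System.out.println("still pending after "
                    + (System.currentTimeMillis() - started) + " ms");
        }
    }
}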
Hello,
Is it correct that producers do not fail new connection establishment when
it exceeds the request timeout?
Running on AWS, we've encountered a problem where certain very low volume
producers end up with metadata that's sufficiently stale that they attempt
to establish a connection to a broker...
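A rough harness along these lines shows the surprising part. The non-routable address stands in for a broker address taken from stale metadata whose packets are silently dropped (as happens when the instance behind it is gone); exact behavior will depend on the client version, and the topic name and values are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.serialization.StringSerializer;

public class StaleBrokerRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Stand-in for a stale broker address: packets are dropped, so the TCP
        // connect just hangs instead of being refused.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.255.255.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 2_000); // has no effect on the connect
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 15_000);      // this is what actually unblocks us

        long start = System.currentTimeMillis();
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // partitionsFor() needs metadata, which needs a connection, so it blocks here.
            producer.partitionsFor("any-topic");
        } catch (TimeoutException e) {
            long elapsed = System.currentTimeMillis() - start;
            // Expect roughly max.block.ms (15s), not request.timeout.ms (2s).
            System.out.println("gave up after " + elapsed + " ms: " + e.getMessage());
        }
    }
}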