Hi,

The consumers are continuously logging the following INFO messages:

"[Consumer clientId=consumer-****-5, groupId=****] Node -2 disconnected."
"[Consumer clientId=consumer-****-5, groupId=****] Node -1 disconnected."
"[Consumer clientId=consumer-****-5, groupId=****] Node 2147483645 disconnected."

This happens because SASL is now enabled on the Kafka cluster and the old consumers are incompatible with it. To resolve this, the consumers need to be recreated with the correct port and SASL configurations. Is there a way to handle this automatically? Specifically, can I trigger an exception during consumer.poll() so that I can fetch the updated configurations and recreate the consumer? (A rough sketch of the recovery loop I have in mind is at the bottom of this mail, below the quoted thread.)

On Wed, 4 Sept 2024 at 12:53, Upendra Yadav <upendra1...@gmail.com> wrote:

> Hi,
>
> We have 15+ components that use the same Kafka cluster to consume
> multiple topic-partitions.
> The Kafka cluster is managed by another team, and they provide an API to
> fetch the Kafka client configs (brokers, SASL, etc.) required to connect.
>
> We fetch these configs on startup and also on an hourly schedule,
> comparing and applying any changes.
> Now that they have enabled SASL auth on the Kafka cluster and changed the
> required config, we only find out through the hourly schedule.
> So, instead of waiting up to an hour, if our consumer threw such an
> exception we could fetch the new configs and create a fresh consumer.
>
>
> On Tue, 3 Sept 2024 at 18:29, Ömer Şiar Baysal <osiarbay...@gmail.com>
> wrote:
>
>> Hi,
>>
>> Why do you even bother catching the exception if you could change the
>> connection details for the clients?
>>
>> You would need to create a new listener, and each client would then need
>> to be reconfigured to connect to it, introducing a code/configuration
>> change anyway.
>>
>> Good luck,
>> Ömer Şiar Baysal
>>
>>
>> On Tue, Sep 3, 2024, 14:29 Upendra Yadav <upendra1...@gmail.com> wrote:
>>
>> > Hello,
>> >
>> > Recently, I've been working on enabling SASL authentication on my Kafka
>> > cluster.
>> > During this process, I want the already running Kafka consumers to
>> > automatically disconnect, update their configurations with the new SASL
>> > settings and port, and then reconnect.
>> >
>> > However, when I enable SASL authentication on the Kafka cluster, my
>> > consumers get stuck in the poll(9000) call and continuously generate
>> > the following logs:
>> >
>> > Sep 03, 2024 3:56:17 PM org.apache.kafka.clients.NetworkClient cancelInFlightRequests
>> > INFO: [Consumer clientId=consumer-123-1, groupId=123] Cancelled in-flight METADATA request with correlation id 37 due to node -3 being disconnected (elapsed time since creation: 304ms, elapsed time since send: 304ms, request timeout: 30000ms)
>> > Sep 03, 2024 3:56:17 PM org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater handleServerDisconnect
>> > WARNING: [Consumer clientId=consumer-123-1, groupId=123] Bootstrap broker 172.20.41.201:9092 (id: -3 rack: null) disconnected
>> > Sep 03, 2024 3:56:19 PM org.apache.kafka.clients.NetworkClient handleDisconnections
>> > INFO: [Consumer clientId=consumer-123-1, groupId=123] Node -1 disconnected.
>> >
>> > Is there a way for the consumer to detect these issues and throw an
>> > exception, allowing me to set the correct configurations to recreate
>> > the consumer and reconnect?
>> >
>> > Kafka broker and client: 3.7.1
>> >
>>
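
For concreteness, here is a minimal sketch of the kind of recovery loop I have in mind. It assumes a hypothetical fetchLatestConfigs() helper that wraps the config API mentioned above, a placeholder topic name "my-topic", and an arbitrary 5-minute quiet-period threshold, so none of those names are from our real setup. Since the broker-side change only shows up on the client as repeated disconnects (as in the logs above) rather than as an exception from poll(), the sketch simply recreates the consumer after the poll loop has made no progress for the configured period:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class SelfHealingConsumerLoop {

    // Hypothetical helper: calls the config API owned by the Kafka team and
    // returns the consumer Properties (bootstrap.servers, security.protocol,
    // sasl.mechanism, sasl.jaas.config, group.id, deserializers, ...).
    static Properties fetchLatestConfigs() {
        Properties props = new Properties();
        // props.put("bootstrap.servers", ...);
        // props.put("security.protocol", ...);
        // ... remaining settings fetched from the API ...
        return props;
    }

    static KafkaConsumer<String, String> newConsumer() {
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(fetchLatestConfigs());
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
        return consumer;
    }

    public static void main(String[] args) {
        final Duration pollTimeout = Duration.ofSeconds(9);
        // How long to tolerate empty polls before assuming the configs are stale;
        // tune this for the expected traffic on the topic.
        final Duration maxQuietPeriod = Duration.ofMinutes(5);

        KafkaConsumer<String, String> consumer = newConsumer();
        long lastProgressMs = System.currentTimeMillis();

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(pollTimeout);
            if (!records.isEmpty()) {
                lastProgressMs = System.currentTimeMillis();
                for (ConsumerRecord<String, String> record : records) {
                    // ... process the record ...
                }
            } else if (System.currentTimeMillis() - lastProgressMs > maxQuietPeriod.toMillis()) {
                // No data for too long: refetch the client configs and rebuild the
                // consumer, so a broker-side change (new port, SASL enabled, ...)
                // is picked up without waiting for the hourly schedule.
                consumer.close();
                consumer = newConsumer();
                lastProgressMs = System.currentTimeMillis();
            }
        }
    }
}

One caveat: an empty poll() is also normal on a quiet topic, so this only works well if the topics carry fairly steady traffic; otherwise the threshold has to be generous, or the check combined with something else, such as watching the connection-related values exposed by consumer.metrics().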