Hi Team, I have a requirement to read real-time data from Kafka and write it to Cassandra. For this I am using SimpleConsumer to read data from Kafka topics and write it into Cassandra. I am maintaining the offsets of the topics in my log files. The issue is that after a few days (3-4 days) my consumer code stops reading data from the Kafka topics and produces the log output below:
20:01:17,068 INFO NposKafkaConsumer:48 - Taking partition from application properties
20:01:17,482 DEBUG BlockingChannel:52 - Created socket with SO_TIMEOUT = 100000 (requested 100000), SO_RCVBUF = 65536 (requested 65536), SO_SNDBUF = 64512 (requested -1).
20:01:17,545 DEBUG SimpleConsumer:52 - Disconnecting from <IP address>:9092
20:01:17,578 DEBUG NposKafkaConsumer:113 - NposKafkaConsumer.run() method Inside while loop :: Value of max_reads::1
20:01:17,662 DEBUG BlockingChannel:52 - Created socket with SO_TIMEOUT = 100000 (requested 100000), SO_RCVBUF = 65536 (requested 65536), SO_SNDBUF = 64512 (requested -1).
20:01:17,804 DEBUG NposKafkaConsumer:193 - numRead::0 Sleeping
20:01:18,804 DEBUG NposKafkaConsumer:113 - NposKafkaConsumer.run() method Inside while loop :: Value of max_reads::1
20:01:18,826 DEBUG NposKafkaConsumer:193 - numRead::0 Sleeping
20:01:19,827 DEBUG NposKafkaConsumer:113 - NposKafkaConsumer.run() method Inside while loop :: Value of max_reads::1
20:01:19,852 DEBUG NposKafkaConsumer:193 - numRead::0 Sleeping

NposKafkaConsumer is my main SimpleConsumer class. But when I restart my Kafka consumer process after incrementing the stored offset by one, my code starts running fine again for the next few days. Can you please help me figure out how I can solve this and where I am going wrong?
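For context, my run() loop roughly follows the standard Kafka 0.8 SimpleConsumer fetch pattern sketched below. This is a simplified sketch, not my exact code: the class name, client id, fetch size, and other names are placeholders, and the loading/saving of offsets against my log files is omitted. The fetchResponse.hasError() branch shown is the part of the standard example I am not certain my code performs, since a fetch error would look exactly like reading zero messages:

// Simplified sketch of the fetch loop (placeholder names, not my exact code).
// Assumes the old kafka.javaapi.consumer.SimpleConsumer API from Kafka 0.8.x.
import java.util.Collections;

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.ErrorMapping;
import kafka.common.TopicAndPartition;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class FetchLoopSketch {

    public static void run(SimpleConsumer consumer, String topic, int partition,
                           long readOffset, long maxReads) throws Exception {
        while (maxReads > 0) {
            FetchRequest req = new FetchRequestBuilder()
                    .clientId("nposClient")                        // placeholder client id
                    .addFetch(topic, partition, readOffset, 100000)
                    .build();
            FetchResponse fetchResponse = consumer.fetch(req);

            // Without this check a fetch error (e.g. OffsetOutOfRange after log
            // retention deletes old segments) is silent: the message set is simply
            // empty and numRead stays 0 on every iteration.
            if (fetchResponse.hasError()) {
                short code = fetchResponse.errorCode(topic, partition);
                if (code == ErrorMapping.OffsetOutOfRangeCode()) {
                    // Re-anchor to a valid offset instead of looping on a stale one.
                    readOffset = getLastOffset(consumer, topic, partition,
                            kafka.api.OffsetRequest.LatestTime(), "nposClient");
                }
                continue;
            }

            long numRead = 0;
            for (MessageAndOffset messageAndOffset : fetchResponse.messageSet(topic, partition)) {
                // ... decode the message, write to Cassandra, persist the offset ...
                readOffset = messageAndOffset.nextOffset();
                numRead++;
            }
            maxReads -= numRead;

            if (numRead == 0) {
                Thread.sleep(1000);   // matches the "numRead::0 Sleeping" log lines
            }
        }
    }

    // Standard helper from the SimpleConsumer example: ask the broker for the
    // earliest/latest valid offset of the partition.
    private static long getLastOffset(SimpleConsumer consumer, String topic, int partition,
                                      long whichTime, String clientName) {
        TopicAndPartition tap = new TopicAndPartition(topic, partition);
        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
                Collections.singletonMap(tap, new PartitionOffsetRequestInfo(whichTime, 1)),
                kafka.api.OffsetRequest.CurrentVersion(), clientName);
        OffsetResponse response = consumer.getOffsetsBefore(request);
        return response.offsets(topic, partition)[0];
    }
}

Thanks & Regards,
Pankaj Ojha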