Hi Sachin,

Try adding --from-beginning to your console consumer to view the 
historically produced data. By default the console consumer starts from 
the latest offset, so it only shows messages produced after it connects.
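
For example, something like this (the bootstrap server and topic name 
are placeholders for your own values):

    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
        --topic my-topic --from-beginning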

Tom Aley
thomas.a...@ibm.com



From:   Sachin Nikumbh <saniku...@yahoo.com.INVALID>
To:     Kafka Users <users@kafka.apache.org>
Date:   17/07/2019 16:01
Subject:        [EXTERNAL] Kafka logs are getting deleted too soon



Hi all,
I have ~96 GB of data in files that I am trying to get into a Kafka 
cluster. I have ~11000 keys for the data and I have created 15 partitions 
for my topic. While my producer is dumping data into Kafka, a console 
consumer shows me that Kafka is receiving the data. The producer runs 
for a few hours before it is done. However, at that point, when I run the 
console consumer again, it does not fetch any data, and if I look at the 
logs directory, the .log files for all the partitions are 0 bytes in size.
If I am not wrong, the default value for log.retention.bytes is -1, which 
means there is no size limit for the logs per partition (I do want to 
confirm that this setting applies per partition). Given that the default 
time-based retention is 7 days, I am failing to understand why the logs 
are getting deleted.
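
As far as I know, any topic-level retention overrides can be listed with 
kafka-configs, roughly like this (the ZooKeeper address and topic name 
are placeholders for my actual ones), to rule out a retention.ms or 
retention.bytes override on the topic:

    bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
        --entity-name my-topic --describe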
The other thing that confuses me is that when I use 
kafka.tools.GetOffsetShell, it shows large offset values for all 15 
partitions.
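
For reference, I am checking the offsets roughly like this (broker 
address and topic name are placeholders), with --time -1 for the latest 
offsets and --time -2 for the earliest:

    bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
        --broker-list localhost:9092 --topic my-topic --time -1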
Can someone please help me understand why I don't see any log data, and 
why kafka.tools.GetOffsetShell makes it look like the data is there?

Thanks,
Sachin


