Hi all,
I have ~96 GB of data in files that I am trying to load into a Kafka cluster. I 
have ~11,000 keys for the data and have created 15 partitions for my topic. 
While my producer is writing data to Kafka, a console consumer shows me that 
Kafka is receiving the data. The producer runs for a few hours before it is 
done. However, at that point, when I run the console consumer again, it does 
not fetch any data. If I look at the logs directory, the .log files for all 
the partitions are 0 bytes in size.
If I am not mistaken, the default value for log.retention.bytes is -1, which 
means there is no size limit on the log; I also want to confirm that this 
setting applies per partition. Given that the default time-based retention is 
7 days, I am failing to understand why the logs are getting deleted. The 
other thing that confuses me is that when I use kafka.tools.GetOffsetShell, 
it shows large enough offset values for all 15 partitions.
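For context, this is roughly how I am checking things; a sketch assuming a placeholder broker address and topic name (the exact flags may differ by Kafka version):

```shell
# Placeholder broker and topic; adjust to your cluster.
BROKER=localhost:9092
TOPIC=my-topic

# Show any per-topic retention overrides (retention.bytes, retention.ms).
bin/kafka-configs.sh --bootstrap-server "$BROKER" \
  --entity-type topics --entity-name "$TOPIC" --describe

# Latest offset per partition (--time -1 means "latest").
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list "$BROKER" --topic "$TOPIC" --time -1

# Earliest offset per partition (--time -2 means "earliest").
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list "$BROKER" --topic "$TOPIC" --time -2
```

If the earliest and latest offsets are equal and nonzero, that would mean the offsets advanced while the producer ran but the segments were since deleted, which matches the empty .log files I am seeing.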
Can someone please help me understand why I don't see any log data, and why 
kafka.tools.GetOffsetShell makes me believe there is data?
Thanks,
Sachin