Hi,
  From Kafka's documentation I found:
  "The Kafka cluster retains all published records—whether or not they have 
been consumed—using a configurable retention period. For example, if the 
retention policy is set to two days, then for the two days after a record is 
published, it is available for consumption, after which it will be discarded to 
free up space. Kafka's performance is effectively constant with respect to data 
size so storing data for a long time is not a problem."
  Does this mean that once the retention period is over, all messages will be 
discarded, whether consumed or not?
  There is a risk of losing logs for consumers that do not consume messages in 
time.
  Currently the log retention policy only considers "log.retention.hours" and 
"log.retention.bytes", which is not enough.
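For reference, these are broker-level settings in server.properties; the values below are illustrative, not recommendations:

```properties
# Time-based retention: delete log segments older than 48 hours
log.retention.hours=48
# Size-based retention: delete oldest segments once a partition's log
# exceeds ~1 GiB (set to -1 to disable the size limit)
log.retention.bytes=1073741824
```

Note that whichever limit is hit first triggers deletion, and neither one looks at consumer offsets.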
  I hope Kafka can take consumers' offsets into account when purging log 
segment files.
  Could this be improved in a future release?


  Good Day, Thanks!



