Evelyn Bayes created KAFKA-8522:
-----------------------------------

             Summary: Tombstones can survive forever
                 Key: KAFKA-8522
                 URL: https://issues.apache.org/jira/browse/KAFKA-8522
             Project: Kafka
          Issue Type: Bug
          Components: log cleaner
            Reporter: Evelyn Bayes


This is a bit of a grey zone as to whether it's a "bug", but it is certainly 
unintended behaviour.

 

Under specific conditions, tombstones effectively survive forever:
 * Low throughput;
 * min.cleanable.dirty.ratio at or near 0; and
 * other parameters at their defaults.

What happens is that all the data continuously gets cycled into the oldest 
segment. Old records get compacted away, but the new records continuously 
update the timestamp of the oldest segment, resetting the countdown for 
deleting tombstones.

So tombstones build up in the oldest segment forever.
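The mechanism above can be illustrated with a toy simulation (this is not Kafka's actual cleaner code; `Segment` and `clean` are invented names, and only the 24-hour delete.retention.ms default is taken from Kafka):

```python
# Toy model of the cleaner's tombstone-retention check: a tombstone is only
# dropped once the segment holding it has been idle for delete.retention.ms.
DELETE_RETENTION_MS = 24 * 60 * 60 * 1000  # Kafka's default: 24 hours


class Segment:
    def __init__(self):
        self.tombstones = {"key-1"}   # a tombstone we'd like to see deleted
        self.last_modified_ms = 0


def clean(segment, now_ms):
    """Drop tombstones only if the segment has been idle long enough."""
    if now_ms - segment.last_modified_ms > DELETE_RETENTION_MS:
        segment.tombstones.clear()


# Low, steady throughput: every cleaning pass compacts fresh records into
# the oldest segment, which bumps its modification time.
oldest = Segment()
for hour in range(24 * 365):              # one simulated year, hourly passes
    now_ms = hour * 60 * 60 * 1000
    oldest.last_modified_ms = now_ms      # new records land in this segment
    clean(oldest, now_ms)

# The idle timer never gets a chance to expire, so the tombstone survives
# the entire year.
print("key-1" in oldest.tombstones)
```

Because each pass resets `last_modified_ms` to the current time, the idle check never fires and the tombstone is never removed.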

 

While you could "fix" this by reducing the segment size, that can be 
undesirable: a sudden spike in throughput could cause a dangerous number of 
segments to be created.
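For reference, the workaround mentioned above could be applied per topic with the kafka-configs tool ("my-topic" and the bootstrap address are placeholders; the value shown is only an example):

```shell
# Lower the topic's segment size so the oldest segment stops absorbing
# all new data. Default segment.bytes is 1 GiB (1073741824).
kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name my-topic \
  --add-config segment.bytes=104857600   # 100 MiB
```

As noted, this trades the tombstone problem for the risk of excessive segment creation if throughput suddenly rises.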



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
