Maciej Bryński created KAFKA-13831:
--------------------------------------

             Summary: Kafka retention can use old value of retention.ms
                 Key: KAFKA-13831
                 URL: https://issues.apache.org/jira/browse/KAFKA-13831
             Project: Kafka
          Issue Type: Bug
          Components: core
    Affects Versions: 2.8.0
            Reporter: Maciej Bryński


Hi,
I think I have found a bug in Kafka log retention.
I'm using Confluent Platform 6.2.2 (Kafka 2.8.0).
I changed retention.ms for a topic twice (the equivalent commands are sketched below):
1. From 432000000 ms to 180000 ms (to purge the topic).
2. Back to 432000000 ms.

After the second change, the retention thread still deletes segments using the 180000 ms value.
Only a broker restart fixes the issue.
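
For illustration, a minimal sketch of such a retention.ms change using kafka-configs (assuming the same bootstrap server as in the topic description below; not the exact commands from my session):
{code}
# 1. Lower retention.ms to purge the topic (illustrative reconstruction of step 1 above)
kafka-configs --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw \
  --add-config retention.ms=180000

# 2. Restore the original value (step 2 above)
kafka-configs --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw \
  --add-config retention.ms=432000000
{code}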

Logs:
{code}
server.log.2022-04-15-03:[2022-04-15 03:29:08,445] INFO [Log partition=pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw-0, dir=/data/kafka] Deleting segment LogSegment(baseOffset=1029819055, size=22996644, lastModifiedTime=1650007299179, largestRecordTimestamp=Some(1650007299178)) due to retention time 180000ms breach based on the largest record timestamp in the segment (kafka.log.Log)
{code}

Topic description:
{code}
kafka-topics --bootstrap-server localhost:9092 --describe --topic pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw
Topic: pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw  TopicId: svLdGbOaRXmdkHGsdlaPUQ  PartitionCount: 1  ReplicationFactor: 3  Configs: min.insync.replicas=2,segment.bytes=1073741824,retention.ms=432000000,segment.ms=86400000
{code}
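
As an illustrative cross-check (not part of my original diagnosis), the stored topic-level override can also be read back with kafka-configs:
{code}
# Illustrative check only: read back the stored topic-level override
kafka-configs --bootstrap-server localhost:9092 --describe \
  --entity-type topics --entity-name pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw
{code}
Since the describe output above already reports retention.ms=432000000 while the deletion log still cites 180000ms, the stale value appears to be held only by the running retention logic, not by the stored configuration.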




