Hi, I have bothered you all quite a bit about this already, and the bug ticket is still open in the Kafka JIRA (KAFKA-1194). It seems that log.cleanup.policy=delete doesn't close the current segment file gracefully before scheduling it for deletion. The stack trace for this error is given below:
[2017-06-14 23:07:34,965] INFO Partition [topic1,2] on broker 0: Expanding ISR for partition topic1-2 from 0,1 to 0,1,2 (kafka.cluster.Partition)
[2017-06-14 23:07:34,990] INFO Partition [topic1,0] on broker 0: Expanding ISR for partition topic1-0 from 0,1 to 0,1,2 (kafka.cluster.Partition)
[2017-06-14 23:07:35,014] INFO Partition [topic1,1] on broker 0: Expanding ISR for partition topic1-1 from 0,1 to 0,1,2 (kafka.cluster.Partition)
[2017-06-14 23:07:44,802] INFO Scheduling log segment 0 for log topic1-1 for deletion. (kafka.log.Log)
[2017-06-14 23:07:44,831] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
kafka.common.KafkaStorageException: Failed to change the log file suffix from  to .deleted for log segment 0
        at kafka.log.LogSegment.kafkaStorageException$1(LogSegment.scala:340)
        at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:342)
        at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:981)
        at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:971)
        at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:673)
        at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:673)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at kafka.log.Log.deleteOldSegments(Log.scala:673)
        at kafka.log.Log.deleteRetenionMsBreachedSegments(Log.scala:703)
        at kafka.log.Log.deleteOldSegments(Log.scala:697)
        at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:474)
        at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:472)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
        at kafka.log.LogManager.cleanupLogs(LogManager.scala:472)
        at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:200)
        at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.file.FileSystemException: \tmp\kafka-logs\topic1-1\00000000000000000000.log -> \tmp\kafka-logs\topic1-1\00000000000000000000.log.deleted: The process cannot access the file because it is being used by another process.
        at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
        at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
        at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
        at java.nio.file.Files.move(Files.java:1395)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:711)
        at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:210)
        ... 28 more
        Suppressed: java.nio.file.FileSystemException: \tmp\kafka-logs\topic1-1\00000000000000000000.log -> \tmp\kafka-logs\topic1-1\00000000000000000000.log.deleted: The process cannot access the file because it is being used by another process.
                at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
                at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
                at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
                at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
                at java.nio.file.Files.move(Files.java:1395)
                at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:708)
                ... 29 more
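For what it's worth, my reading of KAFKA-1194 is that the retention task tries to rename the segment while the broker still holds an open channel on it (as far as I can tell FileRecords opens segments through a RandomAccessFile, with the index files memory-mapped on top of that), and Windows refuses to rename a file that has such a handle open. The standalone sketch below reproduces the same FileSystemException for me; the path and class name are invented for the demo, it is not Kafka code:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Standalone repro sketch for the rename failure above -- my own demo,
// not Kafka's code.
public class RenameWhileOpen {
    public static void main(String[] args) throws IOException {
        Path log = Paths.get("C:\\tmp\\demo\\00000000000000000000.log");
        Files.createDirectories(log.getParent());
        Files.write(log, new byte[1024]);

        // Stand-in for the broker's open handle on a live segment:
        // java.io opens files without FILE_SHARE_DELETE on Windows,
        // so the handle blocks any rename of the underlying file.
        try (RandomAccessFile raf = new RandomAccessFile(log.toFile(), "rw")) {
            // On Windows this throws java.nio.file.FileSystemException:
            // "The process cannot access the file because it is being
            // used by another process." On Linux/macOS the same rename
            // succeeds, which is why the bug only bites Windows brokers.
            Files.move(log, log.resolveSibling(log.getFileName() + ".deleted"),
                       StandardCopyOption.ATOMIC_MOVE);
        }
    }
}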
One of my broker configs (identical on all brokers except broker.id):

log.retention.minutes=10
log.retention.bytes=26214400
log.segment.bytes=10485760
log.retention.check.interval.ms=240000
offsets.retention.check.interval.ms=300000
offsets.retention.minutes=10
log.cleanup.policy=delete

I also tried setting the policy to [compact,delete], but it only seems to compact, never delete. The documentation says comma-separated values are allowed. Maybe the documentation should clarify that Kafka cannot actually delete the files, and can only keep the size down by compacting according to the settings in server.properties? If anyone has made delete work, could you please explain what you did, including any workarounds?

Kindest Regards,
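P.S. In case the exact form matters for the [compact,delete] attempt above, these are the two standard ways to set the combined policy; the ZooKeeper address and topic name below are placeholders from my test setup:

# broker-wide, in server.properties
log.cleanup.policy=compact,delete

# per-topic override with the bundled tool; the brackets are needed
# because the value itself contains a comma
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name topic1 \
  --add-config cleanup.policy=[compact,delete]

Either way, on Windows I only ever see compaction happen; the delete half presumably fails on the same rename as above.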