SledgeHammer created KAFKA-9970:
-----------------------------------

             Summary: Kafka 2.4.0 crashes under Windows when client is restarted
                 Key: KAFKA-9970
                 URL: https://issues.apache.org/jira/browse/KAFKA-9970
             Project: Kafka
          Issue Type: Bug
          Components: core
    Affects Versions: 2.4.0
            Reporter: SledgeHammer


Windows 10 x64 Pro

JDK 11.0.7

Zookeeper 3.6.0

Kafka 2.12-2.4.0

 

I have reproduced this scenario on multiple machines. I do development on my Windows 
box, so I have ZooKeeper and Kafka running locally. On my work PC I leave ZK & Kafka 
running in command windows indefinitely; at home I shut them down when I'm not doing dev.

In either case, a Spring Boot client is continuously started and restarted. 
Intermittently, Kafka will crash and corrupt the logs (I'm not able to capture 
the Kafka crash exception since the window closes), and upon restart I get the 
exception below. NOTE: the file is not actually held by another process, since I 
can delete the logs directory and then restart successfully. The client uses both 
Kafka Streams and classic producers/consumers.

 

[2020-05-07 13:38:27,782] ERROR Failed to clean up log for __consumer_offsets-20 in dir C:\PROGRA~1\kafka_2.12-2.4.0\logs due to IOException (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: C:\PROGRA~1\kafka_2.12-2.4.0\logs\__consumer_offsets-20\00000000000000000000.timeindex.cleaned -> C:\PROGRA~1\kafka_2.12-2.4.0\logs\__consumer_offsets-20\00000000000000000000.timeindex.swap: The process cannot access the file because it is being used by another process.
	at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
	at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
	at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:395)
	at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:292)
	at java.base/java.nio.file.Files.move(Files.java:1421)
	at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:795)
	at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:209)
	at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:497)
	at kafka.log.Log.$anonfun$replaceSegments$4(Log.scala:2267)
	at kafka.log.Log.$anonfun$replaceSegments$4$adapted(Log.scala:2267)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at kafka.log.Log.replaceSegments(Log.scala:2267)
	at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:604)
	at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:529)
	at kafka.log.Cleaner.$anonfun$doClean$6$adapted(LogCleaner.scala:528)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at kafka.log.Cleaner.doClean(LogCleaner.scala:528)
	at kafka.log.Cleaner.clean(LogCleaner.scala:502)
	at kafka.log.LogCleaner$CleanerThread.cleanLog(LogCleaner.scala:371)
	at kafka.log.LogCleaner$CleanerThread.cleanFilthiestLog(LogCleaner.scala:344)
	at kafka.log.LogCleaner$CleanerThread.tryCleanFilthiestLog(LogCleaner.scala:324)
	at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:313)
	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
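
For reference, a minimal standalone Java sketch (not Kafka code; file names and class name are made up for illustration) that mirrors the failing pattern: map a file with RandomAccessFile / FileChannel.map the way the broker maps its index files, keep the mapping alive, then rename the file with Files.move. On Windows the rename is expected to fail with the same FileSystemException as in the trace above, since the live mapping keeps the file "in use" even after the file handle is closed.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class MappedRenameRepro {
    public static void main(String[] args) throws IOException {
        // Hypothetical file names; Kafka's cleaner renames *.cleaned -> *.swap.
        Path source = Paths.get("00000000000000000000.timeindex.cleaned");
        Path target = Paths.get("00000000000000000000.timeindex.swap");
        Files.write(source, new byte[4096]);

        // Map the file the way the broker maps index files, then close the
        // RandomAccessFile so only the MappedByteBuffer keeps the file open.
        MappedByteBuffer mmap;
        try (RandomAccessFile raf = new RandomAccessFile(source.toFile(), "rw")) {
            mmap = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
        }
        mmap.putLong(0, 0L); // keep the mapping reachable so it is not GC'd

        // On Windows this is expected to throw java.nio.file.FileSystemException:
        // "The process cannot access the file because it is being used by another process."
        Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
    }
}

The Files.move(..., ATOMIC_MOVE) call corresponds to the first attempt made inside Utils.atomicMoveWithFallback in the trace above.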


