[
https://issues.apache.org/jira/browse/KAFKA-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16598540#comment-16598540
]
Christoph Schmidt commented on KAFKA-7278:
------------------------------------------
This ticket has raised questions in the comments of the ancient KAFKA-1194 (which
has three old PRs) - does it by chance fix that issue, too? The root cause over
there is that renaming a still-open file fails on Windows, with the only
available workaround being to completely disable the log cleaner.
> replaceSegments() should not call asyncDeleteSegment() for segments which
> have been removed from segments list
> --------------------------------------------------------------------------------------------------------------
>
> Key: KAFKA-7278
> URL: https://issues.apache.org/jira/browse/KAFKA-7278
> Project: Kafka
> Issue Type: Improvement
> Reporter: Dong Lin
> Assignee: Dong Lin
> Priority: Major
> Fix For: 1.1.2, 2.0.1, 2.1.0
>
>
> Currently Log.replaceSegments() calls `asyncDeleteSegment(...)` for every
> segment listed in `oldSegments`. oldSegments should be constructed from
> Log.segments and contain only segments listed in Log.segments.
> However, Log.segments may be modified between the time oldSegments is
> determined and the time Log.replaceSegments() is called. If there is a
> concurrent async deletion of the same log segment file, Log.replaceSegments()
> will call asyncDeleteSegment() for a segment that no longer exists, and the
> Kafka server may shut down the log directory due to a NoSuchFileException.
> This is likely the root cause of
> https://issues.apache.org/jira/browse/KAFKA-6188.
> Given this understanding of the problem, we should be able to fix the issue by
> deleting a segment only if it can still be found in Log.segments.
>
>
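The guard described in the quoted description (schedule an async delete only for segments still present in the segment map) can be sketched roughly as follows. This is a hypothetical simplification, not Kafka's actual Log implementation: `LogSketch`, `addSegment`, and `scheduleAsyncDelete` are illustrative names, and segments are modeled as plain strings keyed by base offset.

```java
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;

// Minimal sketch of a guarded replaceSegments(). "String" stands in for a
// log segment object; names here are assumptions, not Kafka's API.
public class LogSketch {
    // Segments keyed by base offset, analogous to Log.segments.
    private final ConcurrentSkipListMap<Long, String> segments =
            new ConcurrentSkipListMap<>();

    public void addSegment(long baseOffset, String segment) {
        segments.put(baseOffset, segment);
    }

    // Replaces the segments at oldOffsets with newSegment and returns how many
    // async deletions were scheduled. A concurrent deletion may already have
    // removed an old segment from the map; skipping those avoids asking the
    // async deleter to rename a file that no longer exists
    // (the NoSuchFileException described above).
    public int replaceSegments(long newOffset, String newSegment,
                               List<Long> oldOffsets) {
        int deletesScheduled = 0;
        segments.put(newOffset, newSegment);
        for (long offset : oldOffsets) {
            if (offset == newOffset) {
                continue; // this entry was replaced in place, not deleted
            }
            // remove() returns null if another thread already removed the
            // entry, in which case we must NOT schedule a second deletion.
            if (segments.remove(offset) != null) {
                scheduleAsyncDelete(offset);
                deletesScheduled++;
            }
        }
        return deletesScheduled;
    }

    private void scheduleAsyncDelete(long baseOffset) {
        // In Kafka this renames the segment file with a .deleted suffix and
        // schedules background deletion; omitted in this sketch.
    }
}
```

The key point is that membership in the map is checked and the entry removed in a single atomic `remove()` call, so two concurrent callers cannot both schedule a delete for the same segment.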
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)