[ https://issues.apache.org/jira/browse/FLINK-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741665#comment-15741665 ]
ASF GitHub Bot commented on FLINK-5300:
---------------------------------------

Github user StephanEwen commented on the issue:

    https://github.com/apache/flink/pull/2970

    Looks good to me.

    I would actually suggest adding two tests: one in `flink-core` based on the
    local file system, and one in `flink-fs-tests` based on HDFS. That way we
    make sure there are no "unexpected behaviors", such as some default file
    status always being included (`.` or `..` or whatever).

> FileStateHandle#discard & FsCheckpointStateOutputStream#close tries to delete
> non-empty directory
> -------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-5300
>                 URL: https://issues.apache.org/jira/browse/FLINK-5300
>             Project: Flink
>          Issue Type: Improvement
>          Components: State Backends, Checkpointing
>    Affects Versions: 1.2.0, 1.1.3
>            Reporter: Till Rohrmann
>            Assignee: Till Rohrmann
>            Priority: Critical
>
> Flink's behaviour of deleting {{FileStateHandles}} and closing
> {{FsCheckpointStateOutputStream}} always triggers a delete operation on the
> parent directory. This call often fails because the directory still contains
> other files.
> A user reported that the SRE of their Hadoop cluster noticed this behaviour
> in the logs. It would be more system-friendly to first check whether the
> directory is empty. This would prevent many error messages from appearing in
> the Hadoop logs.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
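[Editor's note] The fix discussed above boils down to: after deleting a state file, delete the parent checkpoint directory only if it has become empty, instead of blindly issuing a delete that fails (and spams the Hadoop logs) when siblings remain. The following is a minimal sketch of that check using `java.nio.file` against the local file system; it is an illustration only, not the actual Flink implementation, and the class and method names (`CheckpointFileCleanup`, `deleteAndPruneParent`) are hypothetical.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the "check before deleting the parent" behaviour
// proposed in FLINK-5300; not Flink's actual code.
public class CheckpointFileCleanup {

    /**
     * Deletes the given file, then deletes its parent directory only if the
     * directory is now empty. Returns true iff the parent was also removed.
     */
    public static boolean deleteAndPruneParent(Path file) throws IOException {
        Files.deleteIfExists(file);
        Path parent = file.getParent();
        if (parent == null) {
            return false;
        }
        // Check emptiness first instead of issuing a delete that would fail
        // (and log an error) on a non-empty directory. Note: DirectoryStream
        // does not report '.' or '..', which is exactly the kind of
        // file-system-specific behaviour the suggested tests would pin down.
        try (DirectoryStream<Path> dir = Files.newDirectoryStream(parent)) {
            if (dir.iterator().hasNext()) {
                return false; // other state files are still present
            }
        }
        return Files.deleteIfExists(parent);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("chk-");
        Path a = Files.createFile(dir.resolve("state-a"));
        Path b = Files.createFile(dir.resolve("state-b"));

        System.out.println(deleteAndPruneParent(a)); // false: state-b remains
        System.out.println(deleteAndPruneParent(b)); // true: directory now empty
        System.out.println(Files.exists(dir));       // false
    }
}
```

On HDFS the same idea would go through Hadoop's `FileSystem` API rather than `java.nio.file`, which is why the review above asks for a second test in `flink-fs-tests`: the local-FS and HDFS tests together verify that no file system sneaks a default entry into the "is the directory empty?" listing.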