Ivan A. Veselovsky created HADOOP-8849:
------------------------------------------
Summary: FileUtil#fullyDelete should grant the target directories
+rwx permissions before trying to delete them
Key: HADOOP-8849
URL: https://issues.apache.org/jira/browse/HADOOP-8849
Project: Hadoop Common
Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
Two improvements are suggested for the implementation of the methods
org.apache.hadoop.fs.FileUtil.fullyDelete(File) and
org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
1) We should grant +rwx permissions to the target directories before trying to
delete them.
The mentioned methods fail to delete directories that lack read or execute
permissions.
The actual problem appears when an HDFS-related test times out (with a short
timeout, such as tens of seconds) and the forked test process is killed: some
directories left on disk are not readable and/or executable. This prevents
subsequent tests from running properly, because these directories cannot be
deleted with FileUtil#fullyDelete(), so many subsequent tests fail.
So, it is recommended to grant read, write, and execute permissions to the
directories whose content is to be deleted.
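A minimal sketch of the permission grant described above, using the standard Java 6+ File#setReadable/#setWritable/#setExecutable API (class and method names here are illustrative, not the eventual patch):

```java
import java.io.File;

public class GrantPermissionsSketch {
    // Best-effort grant of +rwx to the owner so the directory's
    // children can be listed and removed.
    static void grantPermissions(File f) {
        f.setReadable(true);
        f.setWritable(true);
        f.setExecutable(true);
    }

    public static void main(String[] args) throws Exception {
        File dir = new File(System.getProperty("java.io.tmpdir"), "perm-sketch");
        dir.mkdirs();
        new File(dir, "child.txt").createNewFile();
        // Simulate the broken state left behind by a killed test process:
        dir.setReadable(false);
        dir.setExecutable(false);
        // Without the grant, dir.list() may return null (unreadable dir).
        grantPermissions(dir);
        // After the grant, the contents are visible again and can be deleted.
        System.out.println(dir.list().length);
        for (File c : dir.listFiles()) {
            c.delete();
        }
        dir.delete();
        System.out.println(dir.exists());
    }
}
```

Note that the set* calls return false rather than throwing when the permission change fails, so a real patch would likely check their return values.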
2) We shouldn't rely upon the File#delete() return value; we should use
File#exists() instead.
FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but
this is not reliable, because File#delete() returns true only if the file was
deleted as a result of that particular #delete() invocation. E.g., in the
following code
if (f.exists()) {   // 1
    return f.delete();  // 2
}
if the file f is deleted by another thread or process between calls "1" and
"2", this fragment returns "false" even though the file f no longer exists
when the method returns.
So it is better to write:
if (f.exists()) {
    f.delete();
    return !f.exists();
}
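Putting the two suggestions together, a sketch of what a revised recursive delete might look like (method names and the overall shape are illustrative; the actual patch may differ):

```java
import java.io.File;

public class FullyDeleteSketch {
    /**
     * Recursively delete f, granting +rwx first (improvement 1) and
     * judging success by File#exists() rather than by the File#delete()
     * return value (improvement 2).
     */
    static boolean fullyDelete(File f) {
        // Improvement 1: make sure we can list and remove the children.
        f.setReadable(true);
        f.setWritable(true);
        f.setExecutable(true);
        File[] children = f.listFiles();
        if (children != null) {
            for (File child : children) {
                fullyDelete(child);
            }
        }
        f.delete();
        // Improvement 2: another thread or process may have deleted f for
        // us; what matters is that it is gone, not who removed it.
        return !f.exists();
    }

    public static void main(String[] args) throws Exception {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                            "fully-delete-sketch");
        new File(dir, "sub").mkdirs();
        new File(dir, "sub/file.txt").createNewFile();
        // Simulate a subdirectory left unreadable by a killed test process.
        new File(dir, "sub").setReadable(false);
        System.out.println(fullyDelete(dir));
        System.out.println(dir.exists());
    }
}
```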
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira