Martin Bukatovic created HADOOP-10813:
-----------------------------------------

             Summary: Define general filesystem exceptions (usable by any HCFS)
                 Key: HADOOP-10813
                 URL: https://issues.apache.org/jira/browse/HADOOP-10813
             Project: Hadoop Common
          Issue Type: Improvement
          Components: fs
    Affects Versions: 2.2.0
            Reporter: Martin Bukatovic
            Priority: Minor


While Hadoop defines a filesystem API which makes it possible to use filesystem
implementations other than HDFS (aka HCFS), we are missing HCFS exceptions for
some failures, e.g. with respect to namenode federation.

With namenode federation, one can specify a different namenode like this:
{{hdfs://namenode_hostname/some/path}}. When the given namenode doesn't
exist, {{UnknownHostException}} is thrown:

{noformat}
$ hadoop fs -mkdir -p hdfs://bugcheck/foo/bar
-mkdir: java.net.UnknownHostException: bugcheck
Usage: hadoop fs [generic options] -mkdir [-p] <path> ...
{noformat}

This is OK for HDFS, but there are other Hadoop filesystems with different
implementations, and raising {{UnknownHostException}} doesn't make sense for
them. For example, the path {{glusterfs://bugcheck/foo/bar}} points to the
file {{/foo/bar}} on a GlusterFS volume named {{bugcheck}}. The meaning is
the same as in HDFS: both the namenode hostname and the GlusterFS volume
name specify a different filesystem tree available to Hadoop.

Would it make sense to define a general HCFS exception which would wrap such
cases, so that it would be possible to fail in the same way when the given
filesystem tree is not available/defined, no matter which Hadoop filesystem
is used?
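
To illustrate the idea, here is a minimal sketch of what such an exception
might look like. The class name {{UnknownFileSystemTreeException}} and its
placement under {{org.apache.hadoop.fs}} are only assumptions for this
proposal, not an existing Hadoop API:

{code:java}
package org.apache.hadoop.fs;

import java.io.IOException;

/**
 * Hypothetical HCFS-level exception thrown when the filesystem tree
 * referenced by a URI authority (e.g. an HDFS nameservice/namenode or a
 * GlusterFS volume) is not available or not defined.
 */
public class UnknownFileSystemTreeException extends IOException {

  /** Authority part of the URI that could not be resolved. */
  private final String authority;

  public UnknownFileSystemTreeException(String authority, Throwable cause) {
    super("Unknown filesystem tree: " + authority, cause);
    this.authority = authority;
  }

  /** Name of the unresolved tree, e.g. namenode hostname or volume name. */
  public String getAuthority() {
    return authority;
  }
}
{code}

Each concrete filesystem could then wrap its backend-specific failure, e.g.
HDFS wrapping the {{UnknownHostException}} and a GlusterFS implementation
wrapping its missing-volume error, so callers catch one exception type
regardless of the URI scheme.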




--
This message was sent by Atlassian JIRA
(v6.2#6252)
