[jira] Resolved: (HDFS-578) Support for using server default values for blockSize and replication when creating a file
[ https://issues.apache.org/jira/browse/HDFS-578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas resolved HDFS-578. -- Resolution: Fixed Fix Version/s: 0.21.0 Release Note: Adds HDFS support for new FileSystem method for clients to get server defaults. Contributed by Kan Zhang. Hadoop Flags: [Reviewed] (was: [Reviewed, Incompatible change]) > Support for using server default values for blockSize and replication when > creating a file > -- > > Key: HDFS-578 > URL: https://issues.apache.org/jira/browse/HDFS-578 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs client, name-node >Reporter: Kan Zhang >Assignee: Kan Zhang > Fix For: 0.21.0 > > Attachments: h578-13.patch, h578-14.patch, h578-16.patch > > > This is a sub-task of HADOOP-4952. This improvement makes it possible for a > client to specify that it wants to use the server default values for > blockSize and replication params when creating a file. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
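A minimal sketch of how a client might use the new server-defaults call described in HDFS-578, instead of hard-coding blockSize and replication. This assumes a reachable HDFS deployment configured as the default filesystem; the path `/tmp/example.txt` is illustrative, and `FileSystem.getServerDefaults()` / `FsServerDefaults` are the public API names in the 0.21 FileSystem interface:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.fs.Path;

public class ServerDefaultsExample {
    public static void main(String[] args) throws Exception {
        // Requires a running HDFS cluster set as fs.default.name.
        FileSystem fs = FileSystem.get(new Configuration());

        // Ask the server for its configured defaults rather than
        // baking client-side values into the create() call.
        FsServerDefaults d = fs.getServerDefaults();

        FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"),
                true,                   // overwrite if present
                d.getFileBufferSize(),  // server-side buffer size
                d.getReplication(),     // server-side replication factor
                d.getBlockSize());      // server-side block size
        out.writeUTF("hello");
        out.close();
    }
}
```

This example cannot run without a live cluster, so treat it as an API sketch rather than a tested program.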
[jira] Created: (HDFS-602) Attempt to make a directory under an existing file on DistributedFileSystem should throw a FileAlreadyExistsException instead of FileNotFoundException
Attempt to make a directory under an existing file on DistributedFileSystem should throw a FileAlreadyExistsException instead of FileNotFoundException - Key: HDFS-602 URL: https://issues.apache.org/jira/browse/HDFS-602 Project: Hadoop HDFS Issue Type: Bug Reporter: Boris Shkolnik An attempt to make a directory under an existing file on DistributedFileSystem should throw a FileAlreadyExistsException instead of FileNotFoundException. We should also unwrap this exception from RemoteException. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HDFS-595) FsPermission tests need to be updated for new octal configuration parameter from HADOOP-6234
[ https://issues.apache.org/jira/browse/HDFS-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas resolved HDFS-595. -- Resolution: Fixed I committed the changes. Thanks Jakob. > FsPermission tests need to be updated for new octal configuration parameter > from HADOOP-6234 > > > Key: HDFS-595 > URL: https://issues.apache.org/jira/browse/HDFS-595 > Project: Hadoop HDFS > Issue Type: New Feature > Components: hdfs client >Reporter: Jakob Homan >Assignee: Jakob Homan > Fix For: 0.21.0 > > Attachments: HDFS-595.patch, HDFS-595.patch > > > HADOOP-6234 changed the format of the configuration umask value. Tests that > use this value need to be updated. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (HDFS-603) Replica-related classes cannot be accessed
Replica-related classes cannot be accessed --- Key: HDFS-603 URL: https://issues.apache.org/jira/browse/HDFS-603 Project: Hadoop HDFS Issue Type: Improvement Components: data-node Affects Versions: Append Branch Reporter: Tsz Wo (Nicholas), SZE Fix For: Append Branch Replica-related classes cannot be accessed above FSDatasetInterface. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HDFS-576) Extend Block report to include under-construction replicas
[ https://issues.apache.org/jira/browse/HDFS-576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko resolved HDFS-576. -- Resolution: Fixed Hadoop Flags: [Reviewed] I committed this to the append branch. > Extend Block report to include under-construction replicas > -- > > Key: HDFS-576 > URL: https://issues.apache.org/jira/browse/HDFS-576 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: data-node, name-node >Affects Versions: Append Branch >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko > Fix For: Append Branch > > Attachments: BlockReport.htm, NewBlockReport.patch, > NewBlockReport.patch > > > Current data-node block reports report only finalized (in append terminology) > blocks. Data-nodes should report all block replicas except for the temporary > ones, so that clients can read from incomplete replicas and block recovery > becomes possible. > The attached design document goes into more detail on the new block reports. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (HDFS-604) Block report processing for append
Block report processing for append -- Key: HDFS-604 URL: https://issues.apache.org/jira/browse/HDFS-604 Project: Hadoop HDFS Issue Type: Sub-task Components: name-node Affects Versions: Append Branch Reporter: Konstantin Shvachko Assignee: Konstantin Shvachko Fix For: Append Branch Implement new block report processing on the name-node as stated in the append design and HDFS-576. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (HDFS-605) There's no need to run fault-injection tests via the 'run-test-hdfs-with-mr' target
There's no need to run fault-injection tests via the 'run-test-hdfs-with-mr' target Key: HDFS-605 URL: https://issues.apache.org/jira/browse/HDFS-605 Project: Hadoop HDFS Issue Type: Improvement Components: build, test Reporter: Konstantin Boudnik It turns out that running fault-injection tests makes no sense when the {{run-test-hdfs-with-mr}} target is executed. Thus, {{build.xml}} has to be modified. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (HDFS-606) ConcurrentModificationException in invalidateCorruptReplicas()
ConcurrentModificationException in invalidateCorruptReplicas() -- Key: HDFS-606 URL: https://issues.apache.org/jira/browse/HDFS-606 Project: Hadoop HDFS Issue Type: Bug Components: name-node Affects Versions: 0.21.0 Reporter: Konstantin Shvachko Assignee: Konstantin Shvachko Fix For: 0.21.0 {{BlockManager.invalidateCorruptReplicas()}} iterates over DatanodeDescriptor-s while removing corrupt replicas from the descriptors. This causes {{ConcurrentModificationException}} if there is more than one replicas of the block. I ran into this exception debugging different scenarios in append, but it should be fixed in the trunk too. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
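HDFS-606 is an instance of the classic iterate-while-removing pitfall. The sketch below demonstrates the usual fix in plain Java: remove elements through the {{Iterator}} rather than through the collection. The class and method names are illustrative only, not the actual BlockManager code:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SafeRemoval {
    // Removing through the Iterator avoids ConcurrentModificationException.
    // A for-each loop calling replicas.remove(...) would throw once a
    // second matching element exists, which mirrors the "more than one
    // replica" condition described in HDFS-606.
    public static List<String> removeCorrupt(List<String> replicas) {
        for (Iterator<String> it = replicas.iterator(); it.hasNext();) {
            if (it.next().startsWith("corrupt")) {
                it.remove(); // safe: mutation goes through the iterator
            }
        }
        return replicas;
    }

    public static void main(String[] args) {
        List<String> replicas =
                new ArrayList<>(List.of("corrupt-1", "ok-1", "corrupt-2"));
        System.out.println(removeCorrupt(replicas)); // prints [ok-1]
    }
}
```

An alternative fix with the same effect is to collect the elements to delete into a separate list during iteration and remove them afterwards.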
How to define the path of an HDFS file?
Hi everyone, when I run the following code: ObjectInputStream in = new ObjectInputStream(new FileInputStream("hdfs://localhost:9000/myDir/trajectory/test.obj")); it throws an error saying the directory does not exist. How can I define its path? Any suggestion is appreciated! Thanks, Austin
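One likely cause of the error above is that {{java.io.FileInputStream}} only understands local filesystem paths; an {{hdfs://}} URI has to go through the Hadoop FileSystem API instead. A sketch of that approach, assuming the file really exists at that path on a NameNode listening on localhost:9000:

```java
import java.io.ObjectInputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadHdfsObject {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path p = new Path("hdfs://localhost:9000/myDir/trajectory/test.obj");

        // Resolve the FileSystem implementation from the path's scheme
        // (hdfs://), then open the file as an InputStream.
        FileSystem fs = p.getFileSystem(conf);
        ObjectInputStream in = new ObjectInputStream(fs.open(p));
        Object obj = in.readObject();
        in.close();
    }
}
```

This requires the Hadoop client jars and a reachable cluster, so it is an API sketch rather than a tested program; if the path was written by a local process, check whether it was ever copied into HDFS (e.g. with `hadoop fs -put`) rather than onto the local disk.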