[jira] [Created] (HDFS-4166) Add support for scheduled automatic snapshots
Suresh Srinivas created HDFS-4166:
-------------------------------------

             Summary: Add support for scheduled automatic snapshots
                 Key: HDFS-4166
                 URL: https://issues.apache.org/jira/browse/HDFS-4166
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: name-node
    Affects Versions: Snapshot (HDFS-2802)
            Reporter: Suresh Srinivas
            Assignee: Suresh Srinivas

This jira will track the work related to supporting automatic scheduled snapshots.
[jira] [Created] (HDFS-4167) Add support for restoring/rollbacking to a snapshot
Suresh Srinivas created HDFS-4167:
-------------------------------------

             Summary: Add support for restoring/rollbacking to a snapshot
                 Key: HDFS-4167
                 URL: https://issues.apache.org/jira/browse/HDFS-4167
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: name-node
    Affects Versions: Snapshot (HDFS-2802)
            Reporter: Suresh Srinivas
            Assignee: Jing Zhao

This jira tracks work related to restoring a directory/file to a snapshot.
[jira] [Created] (HDFS-4168) TestDFSUpgradeFromImage fails in branch-1
Tsz Wo (Nicholas), SZE created HDFS-4168:
-----------------------------------------

             Summary: TestDFSUpgradeFromImage fails in branch-1
                 Key: HDFS-4168
                 URL: https://issues.apache.org/jira/browse/HDFS-4168
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: name-node
            Reporter: Tsz Wo (Nicholas), SZE
            Assignee: Jing Zhao

{noformat}
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:2212)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removePathAndBlocks(FSNamesystem.java:2225)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedDelete(FSDirectory.java:645)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:833)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1024)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:841)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.doUpgrade(FSImage.java:402)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:367)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:420)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:388)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:285)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:546)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1444)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:278)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:173)
    at org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromImage(TestDFSUpgradeFromImage.java:185)
{noformat}
[jira] [Created] (HDFS-4169) Add per-disk latency metrics to DataNode
Todd Lipcon created HDFS-4169:
------------------------------

             Summary: Add per-disk latency metrics to DataNode
                 Key: HDFS-4169
                 URL: https://issues.apache.org/jira/browse/HDFS-4169
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: data-node
    Affects Versions: 3.0.0
            Reporter: Todd Lipcon

Currently, if one of the drives on the DataNode is slow, it's hard to determine what the issue is. This can happen due to a failing disk, a bad controller, etc. It would be preferable to expose per-drive MXBeans (or tagged metrics) with latency statistics about how long reads and writes are taking.
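As a rough illustration of the idea, a per-drive MXBean might look something like the sketch below. All names here are hypothetical; the DataNode does not currently expose such an interface, and the eventual implementation could just as well use tagged metrics instead.

{code}
// Hypothetical interface, one instance registered per data directory,
// e.g. under an ObjectName like
// "Hadoop:service=DataNode,name=DiskLatency,disk=/data/1/dfs/dn".
public interface DiskLatencyMXBean {
  String getVolumePath();         // data directory this bean describes
  double getAvgReadLatencyMs();   // mean latency of recent block reads
  double getAvgWriteLatencyMs();  // mean latency of recent block writes
  long getReadSampleCount();      // number of reads behind the read average
  long getWriteSampleCount();     // number of writes behind the write average
}
{code}

A slow or failing drive would then stand out simply by comparing these numbers across the beans registered for each volume.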
[jira] [Resolved] (HDFS-4168) TestDFSUpgradeFromImage fails in branch-1
     [ https://issues.apache.org/jira/browse/HDFS-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE resolved HDFS-4168.
------------------------------------------
       Resolution: Fixed
    Fix Version/s: 1.2.0

I have committed this. Thanks, Jing!

> TestDFSUpgradeFromImage fails in branch-1
> ------------------------------------------
>
>                 Key: HDFS-4168
>                 URL: https://issues.apache.org/jira/browse/HDFS-4168
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: Jing Zhao
>             Fix For: 1.2.0
>
>         Attachments: HDFS-4168.b1.001.patch
>
>
> {noformat}
> java.lang.NullPointerException
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:2212)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removePathAndBlocks(FSNamesystem.java:2225)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedDelete(FSDirectory.java:645)
>     at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:833)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1024)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:841)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.doUpgrade(FSImage.java:402)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:367)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:420)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:388)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:285)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:546)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1444)
>     at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:278)
>     at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:173)
>     at org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromImage(TestDFSUpgradeFromImage.java:185)
> {noformat}
Re: Create a data block in a certain data node
Sorry, my previous message was full of typos and hard to understand, so let me clarify the question: I have an InputStream on a datanode, and this stream contains exactly 64MB of data that I want to store as an HDFS data block on that datanode. More specifically, I want to use this block to replace an existing block of a file that is already stored in HDFS. Can anyone tell me the easiest way to do this?

Thanks,

On Thu, Nov 8, 2012 at 5:15 PM, Vivi Lang wrote:

> Hi all,
>
> If I have an InputStream on a datanode (containing exactly 64MB of data
> with which I want to replace the content of an existing data block), can
> anyone tell me the easiest way to do this? Moreover, I would like to
> write this stream on the local datanode.
>
> Thanks,
>
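To make the local-write half of the question concrete, here is a hedged sketch in plain Java that copies exactly one 64MB block's worth of data from an InputStream to a local file. It deliberately stops short of the actual ask: swapping the file in for an existing block (and keeping the block's checksum and metadata consistent) goes through DataNode internals that are not shown here, and the target path below is only a placeholder.

{code}
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch only: writes up to 64MB from a stream to a local file.
// The target path is a hypothetical placeholder, not a real block location.
public class BlockSizedCopy {
    private static final long BLOCK_SIZE = 64L * 1024 * 1024;

    public static void copyOneBlock(InputStream in, String targetPath) throws IOException {
        byte[] buf = new byte[8192];
        long written = 0;
        try (OutputStream out = new FileOutputStream(targetPath)) {
            int n;
            while (written < BLOCK_SIZE
                    && (n = in.read(buf, 0, (int) Math.min(buf.length, BLOCK_SIZE - written))) != -1) {
                out.write(buf, 0, n);
                written += n;
            }
        }
    }
}
{code}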
[jira] [Created] (HDFS-4170) Add snapshot information to INodesInPath
Tsz Wo (Nicholas), SZE created HDFS-4170:
-----------------------------------------

             Summary: Add snapshot information to INodesInPath
                 Key: HDFS-4170
                 URL: https://issues.apache.org/jira/browse/HDFS-4170
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: name-node
            Reporter: Tsz Wo (Nicholas), SZE
            Assignee: Tsz Wo (Nicholas), SZE

For snapshot paths, the snapshot information is required for accessing the snapshot. For non-snapshot paths, the latest snapshot found in the path is required for maintaining diffs for modification.
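To illustrate the idea only (this is not the HDFS-4170 patch; every name below is made up for the sketch), the path-resolution result would carry the snapshot it encountered alongside the resolved inodes:

{code}
// Hypothetical sketch of carrying snapshot information with a resolved path.
class Snapshot {
  final String name;
  Snapshot(String name) { this.name = name; }
}

class ResolvedPathSketch {
  private final Object[] inodes;      // inodes resolved along the path (stand-in type)
  private final Snapshot snapshot;    // snapshot path: the snapshot being read;
                                      // normal path: latest snapshot seen, so a later
                                      // modification knows which diff list to update
  private final boolean isSnapshotPath;

  ResolvedPathSketch(Object[] inodes, Snapshot snapshot, boolean isSnapshotPath) {
    this.inodes = inodes;
    this.snapshot = snapshot;
    this.isSnapshotPath = isSnapshotPath;
  }

  Snapshot getSnapshot() { return snapshot; }
  boolean isSnapshotPath() { return isSnapshotPath; }
}
{code}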
[jira] [Resolved] (HDFS-4048) Use ERROR instead of INFO for volume failure logs
     [ https://issues.apache.org/jira/browse/HDFS-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins resolved HDFS-4048.
-------------------------------
    Resolution: Fixed

> Use ERROR instead of INFO for volume failure logs
> --------------------------------------------------
>
>                 Key: HDFS-4048
>                 URL: https://issues.apache.org/jira/browse/HDFS-4048
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Stephen Chu
>            Assignee: Stephen Chu
>             Fix For: 2.0.3-alpha
>
>         Attachments: HDFS-4048.patch.branch-2, HDFS-4048.patch.branch-2.2, HDFS-4048.patch.trunk, HDFS-4048.patch.trunk.2, HDFS-4048.patch.trunk.3
>
>
> I misconfigured the permissions of the DataNode data directories (they were owned by root, instead of hdfs).
> I wasn't aware of this misconfiguration until a few days later. I usually search through the logs for WARN and ERROR but didn't find messages at these levels that indicated volume failure.
> After more carefully reading the logs, I found:
> {code}
> 2012-10-01 13:07:10,440 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot access storage directory /data/4/dfs/dn
> 2012-10-01 13:07:10,440 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /data/4/dfs/dn does not exist.
> {code}
> I think we should bump the log level to ERROR. This will make the problem more visible to users.
[jira] [Reopened] (HDFS-4048) Use ERROR instead of INFO for volume failure logs
     [ https://issues.apache.org/jira/browse/HDFS-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins reopened HDFS-4048:
-------------------------------

Forgot to mention: the previous test failure was unrelated. Since we're just changing log levels, no new test is necessary. I've committed this and merged it to branch-2. Thanks, Stephen!

> Use ERROR instead of INFO for volume failure logs
> --------------------------------------------------
>
>                 Key: HDFS-4048
>                 URL: https://issues.apache.org/jira/browse/HDFS-4048
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Stephen Chu
>            Assignee: Stephen Chu
>             Fix For: 2.0.3-alpha
>
>         Attachments: HDFS-4048.patch.branch-2, HDFS-4048.patch.branch-2.2, HDFS-4048.patch.trunk, HDFS-4048.patch.trunk.2, HDFS-4048.patch.trunk.3
>
>
> I misconfigured the permissions of the DataNode data directories (they were owned by root, instead of hdfs).
> I wasn't aware of this misconfiguration until a few days later. I usually search through the logs for WARN and ERROR but didn't find messages at these levels that indicated volume failure.
> After more carefully reading the logs, I found:
> {code}
> 2012-10-01 13:07:10,440 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot access storage directory /data/4/dfs/dn
> 2012-10-01 13:07:10,440 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /data/4/dfs/dn does not exist.
> {code}
> I think we should bump the log level to ERROR. This will make the problem more visible to users.
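For context, the change discussed here is essentially a log-level promotion. A minimal sketch of the pattern (the actual patch edits org.apache.hadoop.hdfs.server.common.Storage and may differ line by line; only the commons-logging Log API below is real):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Sketch of the change in spirit; class and method names are illustrative.
class StorageCheckSketch {
  private static final Log LOG = LogFactory.getLog(StorageCheckSketch.class);

  void reportInaccessibleDir(String rootPath) {
    // Before HDFS-4048 this message was logged at INFO, so an operator grepping
    // for WARN/ERROR never saw the failed volume; the fix raises it to ERROR.
    LOG.error("Cannot access storage directory " + rootPath);
  }
}
{code}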