Hadoop-Hdfs-trunk - Build # 656 - Still Failing
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/656/

### ## LAST 60 LINES OF THE CONSOLE ###

[...truncated 795868 lines...]
    [junit]     at java.lang.Thread.run(Thread.java:662)
    [junit]
    [junit] 2011-05-04 12:43:08,222 INFO datanode.DataNode (DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-05-04 12:43:08,222 WARN datanode.DataNode (DataNode.java:offerService(1065)) - BPOfferService for block pool=BP-438309886-127.0.1.1-1304512987024 received exception:java.lang.InterruptedException
    [junit] 2011-05-04 12:43:08,222 WARN datanode.DataNode (DataNode.java:run(1218)) - DatanodeRegistration(127.0.0.1:60686, storageID=DS-101149630-127.0.1.1-60686-1304512987544, infoPort=42802, ipcPort=42475, storageInfo=lv=-35;cid=testClusterID;nsid=1651019765;c=0) ending block pool service for: BP-438309886-127.0.1.1-1304512987024
    [junit] 2011-05-04 12:43:08,223 INFO datanode.DataBlockScanner (DataBlockScanner.java:removeBlockPool(277)) - Removed bpid=BP-438309886-127.0.1.1-1304512987024 from blockPoolScannerMap
    [junit] 2011-05-04 12:43:08,223 INFO datanode.DataNode (FSDataset.java:shutdownBlockPool(2560)) - Removing block pool BP-438309886-127.0.1.1-1304512987024
    [junit] 2011-05-04 12:43:08,223 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-05-04 12:43:08,223 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-05-04 12:43:08,223 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1041)) - Shutting down DataNode 0
    [junit] 2011-05-04 12:43:08,223 WARN datanode.DirectoryScanner (DirectoryScanner.java:shutdown(297)) - DirectoryScanner: shutdown has been called
    [junit] 2011-05-04 12:43:08,224 INFO datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:startNewPeriod(591)) - Starting a new period : work left in prev period : 100.00%
    [junit] 2011-05-04 12:43:08,224 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-05-04 12:43:08,224 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 55682
    [junit] 2011-05-04 12:43:08,225 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 55682: exiting
    [junit] 2011-05-04 12:43:08,228 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-05-04 12:43:08,228 WARN datanode.DataNode (DataXceiverServer.java:run(143)) - 127.0.0.1:45849:DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit]     at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit]     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit]     at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:136)
    [junit]     at java.lang.Thread.run(Thread.java:662)
    [junit]
    [junit] 2011-05-04 12:43:08,228 INFO datanode.DataNode (DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-05-04 12:43:08,228 WARN datanode.DataNode (DataNode.java:offerService(1065)) - BPOfferService for block pool=BP-438309886-127.0.1.1-1304512987024 received exception:java.lang.InterruptedException
    [junit] 2011-05-04 12:43:08,228 WARN datanode.DataNode (DataNode.java:run(1218)) - DatanodeRegistration(127.0.0.1:45849, storageID=DS-1819675380-127.0.1.1-45849-1304512987424, infoPort=42997, ipcPort=55682, storageInfo=lv=-35;cid=testClusterID;nsid=1651019765;c=0) ending block pool service for: BP-438309886-127.0.1.1-1304512987024
    [junit] 2011-05-04 12:43:08,229 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 55682
    [junit] 2011-05-04 12:43:08,329 INFO datanode.DataBlockScanner (DataBlockScanner.java:removeBlockPool(277)) - Removed bpid=BP-438309886-127.0.1.1-1304512987024 from blockPoolScannerMap
    [junit] 2011-05-04 12:43:08,329 INFO datanode.DataNode (FSDataset.java:shutdownBlockPool(2560)) - Removing block pool BP-438309886-127.0.1.1-1304512987024
    [junit] 2011-05-04 12:43:08,329 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-05-04 12:43:08,329 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-05-04 12:43:08,330 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.Int
[jira] [Created] (HDFS-1887) If the DataNode is killed after 'data.dir' is created but before LAYOUTVERSION is written to the storage file, an EOFException is thrown on every subsequent restart
If the DataNode is killed after 'data.dir' is created but before LAYOUTVERSION is written to the storage file, an EOFException is thrown while reading the storage file on every subsequent restart.
---------------------------------------------------------------------------------------------------
Key: HDFS-1887
URL: https://issues.apache.org/jira/browse/HDFS-1887
Project: Hadoop HDFS
Issue Type: Bug
Components: data-node
Affects Versions: 0.21.0, 0.20.1, 0.23.0
Environment: Linux
Reporter: sravankorumilli
Priority: Minor

Assume the DataNode is killed after 'data.dir' is created but before LAYOUTVERSION is written to the storage file. On every subsequent restart an EOFException is thrown while reading the storage file, and the DataNode cannot be restarted successfully until 'data.dir' is deleted. These are the corresponding logs:

2011-05-02 19:12:19,389 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.EOFException
    at java.io.RandomAccessFile.readInt(RandomAccessFile.java:725)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.isConversionNeeded(DataStorage.java:203)
    at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:697)
    at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:62)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:476)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:116)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:336)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:260)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:237)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1440)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1393)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1407)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1552)

Our Hadoop cluster is managed by cluster management software that tries to eliminate any manual intervention in setting up and managing the cluster, but in the above scenario manual intervention is required to recover the DataNode.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
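For reference, the failure mode described above is easy to reproduce outside Hadoop: reading an int from a file that was created but never written fails with the same EOFException as in the stack trace. A minimal sketch (the class name and temp-file prefix are hypothetical; this is not the actual DataStorage code, it only mimics the readInt on an empty storage file):

```java
import java.io.EOFException;
import java.io.File;
import java.io.RandomAccessFile;

public class TruncatedStorageFileDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for a storage file whose directory was created but whose
        // LAYOUTVERSION was never written: an empty file.
        File storage = File.createTempFile("storage", ".tmp");
        storage.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(storage, "r")) {
            // Reading the first int (where LAYOUTVERSION would be) fails
            // immediately on an empty file.
            raf.readInt();
            System.out.println("read succeeded unexpectedly");
        } catch (EOFException e) {
            System.out.println("EOFException on empty storage file, as reported");
        }
    }
}
```

This is why the DataNode cannot recover on its own: the file exists, so the storage directory is analyzed, but the very first read hits end-of-file.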
[jira] [Created] (HDFS-1888) MiniDFSCluster#corruptBlockOnDatanodes() access must be public for MapReduce contrib raid
MiniDFSCluster#corruptBlockOnDatanodes() access must be public for MapReduce contrib raid
-----------------------------------------------------------------------------------------
Key: HDFS-1888
URL: https://issues.apache.org/jira/browse/HDFS-1888
Project: Hadoop HDFS
Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Fix For: 0.23.0
Attachments: HDFS-1888.patch

During the HDFS-1052 code merge, the method was made package-private. It needs to be public so that MapReduce contrib raid can access it.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HDFS-1888) MiniDFSCluster#corruptBlockOnDatanodes() access must be public for MapReduce contrib raid
[ https://issues.apache.org/jira/browse/HDFS-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas resolved HDFS-1888.
-----------------------------------
Resolution: Fixed

I committed the patch.

> MiniDFSCluster#corruptBlockOnDatanodes() access must be public for MapReduce contrib raid
> -----------------------------------------------------------------------------------------
>
> Key: HDFS-1888
> URL: https://issues.apache.org/jira/browse/HDFS-1888
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 0.23.0
> Reporter: Suresh Srinivas
> Assignee: Suresh Srinivas
> Fix For: 0.23.0
>
> Attachments: HDFS-1888.patch
>
> HDFS-1052 during code merge the method was made package private. It needs to be public for access in MapReduce contrib raid.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-1889) incorrect path in start/stop dfs script
incorrect path in start/stop dfs script
---------------------------------------
Key: HDFS-1889
URL: https://issues.apache.org/jira/browse/HDFS-1889
Project: Hadoop HDFS
Issue Type: Bug
Reporter: John George
Assignee: John George

HADOOP_HOME in start-dfs.sh and stop-dfs.sh should be changed to HADOOP_HDFS_HOME, because the hdfs script lives in the hdfs directory, not the common directory.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
Hadoop-Hdfs-trunk-Commit - Build # 621 - Still Failing
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/621/

### ## LAST 60 LINES OF THE CONSOLE ###

[...truncated 2829 lines...]
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 8.722 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.898 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.727 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
    [junit] Tests run: 2, Failures: 2, Errors: 0, Time elapsed: 12.585 sec
    [junit] Test org.apache.hadoop.hdfs.server.namenode.TestBackupNode FAILED
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 29.086 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestComputeInvalidateWork
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.592 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestDatanodeDescriptor
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.171 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 11.61 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 3.866 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.83 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestHost2NodesMap
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.085 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.751 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestOverReplicatedBlocks
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.665 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestPendingReplication
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.302 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.058 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestSafeMode
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.681 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 9.49 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 7.906 sec
    [junit] Running org.apache.hadoop.net.TestNetworkTopology
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.099 sec
    [junit] Running org.apache.hadoop.security.TestPermission
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.73 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:730: Tests failed!

Total time: 8 minutes 31 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

### ## FAILED TESTS (if any) ##

2 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint
Error Message: null
Stack Trace:
junit.framework.AssertionFailedError: null
    at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:152)
    at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.__CLR3_0_2xuql33xvf(TestBackupNode.java:103)
    at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:101)

FAILED: org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testBackupRegistration
Error Message: Only one backup node should be able to start
Stack Trace: junit.
[jira] [Created] (HDFS-1890) A few improvements on the LeaseRenewer.pendingCreates map
A few improvements on the LeaseRenewer.pendingCreates map
---------------------------------------------------------
Key: HDFS-1890
URL: https://issues.apache.org/jira/browse/HDFS-1890
Project: Hadoop HDFS
Issue Type: Improvement
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor

- The field should be a plain {{Map}} rather than a {{SortedMap}}.
- The value type should be {{DFSOutputStream}} rather than {{OutputStream}}.
- The variable should be renamed from pendingCreates to filesBeingWritten, since we now support append.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
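The three suggestions amount to a declaration change, which can be sketched as follows (a minimal illustration only: the value type is a stub standing in for DFSOutputStream, and none of the real LeaseRenewer code is reproduced here):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for org.apache.hadoop.hdfs.DFSOutputStream.
class DFSOutputStreamStub {
}

public class LeaseRenewerSketch {
    // Before (roughly): SortedMap<String, OutputStream> pendingCreates = new TreeMap<...>();
    // After, per the issue: a plain Map, a more specific value type, and a
    // name that also covers files opened for append.
    private final Map<String, DFSOutputStreamStub> filesBeingWritten =
            new HashMap<String, DFSOutputStreamStub>();

    void put(String src, DFSOutputStreamStub out) {
        filesBeingWritten.put(src, out);
    }

    int size() {
        return filesBeingWritten.size();
    }

    public static void main(String[] args) {
        LeaseRenewerSketch renewer = new LeaseRenewerSketch();
        renewer.put("/tmp/a", new DFSOutputStreamStub());
        System.out.println(renewer.size());
    }
}
```

Since the map is only ever iterated or looked up by path, nothing in the sketch needs the sorted-order guarantee that {{SortedMap}} pays for.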
[jira] [Created] (HDFS-1891) TestBackupNode fails intermittently
TestBackupNode fails intermittently
-----------------------------------
Key: HDFS-1891
URL: https://issues.apache.org/jira/browse/HDFS-1891
Project: Hadoop HDFS
Issue Type: Bug
Components: test
Affects Versions: 0.23.0
Reporter: Suresh Srinivas
Assignee: Giridharan Kesavan

TestBackupNode fails due to an unexpected IPv6 address format.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
Hadoop-Hdfs-trunk-Commit - Build # 622 - Still Failing
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/622/

### ## LAST 60 LINES OF THE CONSOLE ###

[...truncated 2830 lines...]
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 8.153 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.785 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.738 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
    [junit] Tests run: 2, Failures: 2, Errors: 0, Time elapsed: 12.772 sec
    [junit] Test org.apache.hadoop.hdfs.server.namenode.TestBackupNode FAILED
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 29.012 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestComputeInvalidateWork
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.595 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestDatanodeDescriptor
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.164 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 11.844 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.187 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.83 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestHost2NodesMap
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.087 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.803 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestOverReplicatedBlocks
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.56 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestPendingReplication
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.305 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.055 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestSafeMode
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.345 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 9.375 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 7.876 sec
    [junit] Running org.apache.hadoop.net.TestNetworkTopology
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.111 sec
    [junit] Running org.apache.hadoop.security.TestPermission
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.877 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:730: Tests failed!

Total time: 9 minutes 46 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

### ## FAILED TESTS (if any) ##

3 tests failed.
REGRESSION: org.apache.hadoop.cli.TestHDFSCLI.testAll
Error Message: One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
    at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
    at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
    at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)

FAILED: org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint
Error Message: null
Stack
Hadoop-Hdfs-trunk-Commit - Build # 623 - Still Failing
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/623/

### ## LAST 60 LINES OF THE CONSOLE ###

[...truncated 2830 lines...]
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 8.606 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.964 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.713 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
    [junit] Tests run: 2, Failures: 2, Errors: 0, Time elapsed: 12.804 sec
    [junit] Test org.apache.hadoop.hdfs.server.namenode.TestBackupNode FAILED
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 28.794 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestComputeInvalidateWork
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.666 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestDatanodeDescriptor
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.173 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 11.889 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.251 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.87 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestHost2NodesMap
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.085 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.662 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestOverReplicatedBlocks
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.781 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestPendingReplication
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.305 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.068 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestSafeMode
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.204 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 9.359 sec
    [junit] Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 7.808 sec
    [junit] Running org.apache.hadoop.net.TestNetworkTopology
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.114 sec
    [junit] Running org.apache.hadoop.security.TestPermission
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.89 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:730: Tests failed!

Total time: 8 minutes 50 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

### ## FAILED TESTS (if any) ##

3 tests failed.
FAILED: org.apache.hadoop.cli.TestHDFSCLI.testAll
Error Message: One of the tests failed. See the Detailed results to identify the command that failed
Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
    at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
    at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
    at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)

FAILED: org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint
Error Message: null
Stack Trac
[jira] [Created] (HDFS-1892) Update HDFS-1073 branch to deal with OP_INVALID-filled preallocation
Update HDFS-1073 branch to deal with OP_INVALID-filled preallocation
--------------------------------------------------------------------
Key: HDFS-1892
URL: https://issues.apache.org/jira/browse/HDFS-1892
Project: Hadoop HDFS
Issue Type: Sub-task
Reporter: Todd Lipcon
Assignee: Todd Lipcon

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-1893) Change edit logs and images to be named based on txid
Change edit logs and images to be named based on txid
-----------------------------------------------------
Key: HDFS-1893
URL: https://issues.apache.org/jira/browse/HDFS-1893
Project: Hadoop HDFS
Issue Type: Sub-task
Reporter: Todd Lipcon
Assignee: Todd Lipcon

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-1894) Add constants for LAYOUT_VERSIONs in edits log branch
Add constants for LAYOUT_VERSIONs in edits log branch
-----------------------------------------------------
Key: HDFS-1894
URL: https://issues.apache.org/jira/browse/HDFS-1894
Project: Hadoop HDFS
Issue Type: Sub-task
Reporter: Todd Lipcon
Assignee: Todd Lipcon

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HDFS-1894) Add constants for LAYOUT_VERSIONs in edits log branch
[ https://issues.apache.org/jira/browse/HDFS-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon resolved HDFS-1894.
-------------------------------
Resolution: Fixed

Committed to branch under CTR policy.

> Add constants for LAYOUT_VERSIONs in edits log branch
> -----------------------------------------------------
>
> Key: HDFS-1894
> URL: https://issues.apache.org/jira/browse/HDFS-1894
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: name-node
> Affects Versions: Edit log branch (HDFS-1073)
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Fix For: Edit log branch (HDFS-1073)
>
> Attachments: hdfs-1894.txt, hdfs-1894.txt
>
> When merging from trunk into branch, it's pretty difficult to resolve conflicts around the layout versions, since trunk keeps swallowing whatever layout version I've picked in the branch. Adding a couple of constants will make the merges much easier.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
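The merge-conflict problem described in the issue can be illustrated with a toy sketch: give each branch's layout version a named constant, so a merge touches a symbol rather than a magic literal scattered through the code. The constant names and the branch value below are hypothetical (only lv=-35 appears in the build logs earlier in this digest); this is not the actual branch code:

```java
public class LayoutVersionSketch {
    // Hypothetical named constants; the real values live in the HDFS-1073 branch.
    static final int LAYOUT_VERSION_TRUNK = -35;
    // Deriving the branch value from trunk's keeps the relationship explicit,
    // so a trunk bump conflicts in exactly one place instead of everywhere
    // the literal was used.
    static final int LAYOUT_VERSION_EDITS_BRANCH = LAYOUT_VERSION_TRUNK - 1;

    public static void main(String[] args) {
        System.out.println(LAYOUT_VERSION_EDITS_BRANCH);
    }
}
```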
About how HDFS chooses DataNodes to store files
Hi all,

We know that HDFS divides a large file into several blocks (each 64 MB, with 3 replicas by default), and that once the metadata in the NameNode has been updated, a DataStreamer thread transports the blocks to the DataNodes; for each block, the client sends the data to the 3 DataNodes through a pipeline:

dfsClient.namenode.create(src, masked, dfsClient.clientName, new EnumSetWritable(flag), createParent, replication, blockSize);
streamer = new DataStreamer();
streamer.start();

I am wondering how the cluster chooses which DataNodes store the blocks. What is the policy? Also, since a file may have plenty of blocks, in what sequence are those blocks transported? From what I read in the code, there is only one thread doing this from the client to the DataNodes.

Any answer or URL is appreciated. Thanks!

Best regards,
xu
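On the placement question: the default policy (BlockPlacementPolicyDefault) roughly places the first replica on the writer's local DataNode (when the client itself runs on one), the second on a node on a different rack, and the third on a different node on the same rack as the second. A toy sketch of that rule follows; all node and rack names are hypothetical, and this is a simplified illustration, not the real Hadoop API:

```java
import java.util.ArrayList;
import java.util.List;

public class PlacementSketch {
    record Node(String name, String rack) {}

    // Choose 3 targets following the default rule:
    // 1st replica local, 2nd off-rack, 3rd on the same rack as the 2nd.
    static List<Node> choose(Node writer, List<Node> cluster) {
        List<Node> targets = new ArrayList<>();
        targets.add(writer); // replica 1: the writer's own DataNode
        Node remote = cluster.stream()
                .filter(n -> !n.rack().equals(writer.rack()))
                .findFirst().orElseThrow(); // replica 2: a different rack
        targets.add(remote);
        Node sameRack = cluster.stream()
                .filter(n -> n.rack().equals(remote.rack()) && !n.equals(remote))
                .findFirst().orElseThrow(); // replica 3: same rack as replica 2
        targets.add(sameRack);
        return targets;
    }

    public static void main(String[] args) {
        Node writer = new Node("dn1", "rackA");
        List<Node> cluster = List.of(
                new Node("dn2", "rackA"),
                new Node("dn3", "rackB"),
                new Node("dn4", "rackB"));
        System.out.println(choose(writer, cluster));
    }
}
```

As for ordering: blocks of one file are written sequentially by that single DataStreamer, one block pipeline at a time, which is why only one thread is visible in the client code.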