See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2458/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7787 lines...]
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:44 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [ 03:54 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.068 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:58 h
[INFO] Finished at: 2015-10-21T18:47:56+00:00
[INFO] Final Memory: 55M/723M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR]
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating MAPREDUCE-6489
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at org.junit.Assert.assertEquals(Assert.java:542)
    at org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure$SlowWriter.checkReplication(TestReplaceDatanodeOnFailure.java:235)
    at org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure(TestReplaceDatanodeOnFailure.java:154)


FAILED:  org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount

Error Message:
Timeout: excess replica count not equal to 2 for block blk_1073741825_1001 after 20000 msec.  Last counts: live = 2, excess = 0, corrupt = 0

Stack Trace:
java.util.concurrent.TimeoutException: Timeout: excess replica count not equal to 2 for block blk_1073741825_1001 after 20000 msec.  Last counts: live = 2, excess = 0, corrupt = 0
    at org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:152)
    at org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:146)
    at org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount(TestNodeCount.java:130)


FAILED:  org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks.testSetrepIncWithUnderReplicatedBlocks

Error Message:
test timed out after 60000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 60000 milliseconds
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.fs.shell.SetReplication.waitForReplication(SetReplication.java:127)
    at org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:77)
    at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:166)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:309)
    at org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks.testSetrepIncWithUnderReplicatedBlocks(TestUnderReplicatedBlocks.java:70)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot.testSaveLoadImageWithAppending

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:37249,DS-ec7f85c5-764b-42aa-97bb-a3de71498174,DISK], DatanodeInfoWithStorage[127.0.0.1:50583,DS-53dc247b-4430-475a-9196-45a346227db9,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:37249,DS-ec7f85c5-764b-42aa-97bb-a3de71498174,DISK], DatanodeInfoWithStorage[127.0.0.1:50583,DS-53dc247b-4430-475a-9196-45a346227db9,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:37249,DS-ec7f85c5-764b-42aa-97bb-a3de71498174,DISK], DatanodeInfoWithStorage[127.0.0.1:50583,DS-53dc247b-4430-475a-9196-45a346227db9,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:37249,DS-ec7f85c5-764b-42aa-97bb-a3de71498174,DISK], DatanodeInfoWithStorage[127.0.0.1:50583,DS-53dc247b-4430-475a-9196-45a346227db9,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1163)
    at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1233)
    at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1424)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1339)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1322)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:596)
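Note on the last failure: the error message itself explains the knob involved. Under the DEFAULT policy the client insists on replacing a failed datanode in the write pipeline, which cannot succeed on a small test cluster with no spare node. Below is a minimal client-side sketch of relaxing that policy, assuming the standard org.apache.hadoop.conf.Configuration API; the property keys are the one quoted in the error message plus its companion 'enable' switch from hdfs-default.xml, and the class name here is purely illustrative.

    import org.apache.hadoop.conf.Configuration;

    public class ReplacePolicyExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Keep the replace-datanode-on-failure feature itself enabled.
        conf.setBoolean(
            "dfs.client.block.write.replace-datanode-on-failure.enable", true);
        // But never require a substitute datanode when one fails; documented
        // policy values are DEFAULT, ALWAYS, and NEVER. NEVER avoids the
        // IOException above on 2-3 node clusters, at the cost of finishing
        // the write with fewer replicas than requested.
        conf.set(
            "dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        // Any FileSystem/DFSClient created from this conf picks up the policy.
        System.out.println(conf.get(
            "dfs.client.block.write.replace-datanode-on-failure.policy"));
      }
    }

The same two properties can equally be set in the client's hdfs-site.xml; whether relaxing the policy is appropriate for these tests (versus giving the mini cluster an extra datanode) is a judgment call for the fix.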