See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1202/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 9132 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:37 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:07 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.101 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:12 h
[INFO] Finished at: 2016-05-13T01:25:59+00:00
[INFO] Final Memory: 70M/878M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs[1]

Error Message:
logging edit without syncing should do not affect txid expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: logging edit without syncing should do not affect txid expected:<1> but was:<2>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:743)
        at org.junit.Assert.assertEquals(Assert.java:118)
        at org.junit.Assert.assertEquals(Assert.java:555)
        at org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs(TestEditLog.java:594)
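
The "expected:<1> but was:<2>" tail is JUnit's assertEquals(message, expected, actual) format: the test expected the txid to stay at 1 after an unsynced edit, and it advanced to 2. A minimal illustration of how that exact message shape is produced (hypothetical values, not the real test body):

    import static org.junit.Assert.assertEquals;

    public class AssertMessageSketch {
        public static void main(String[] args) {
            long expectedTxid = 1;  // txid the test expects after an unsynced edit
            long actualTxid = 2;    // txid the edit log actually reported
            // Throws: java.lang.AssertionError: <message> expected:<1> but was:<2>
            assertEquals("logging edit without syncing should do not affect txid",
                    expectedTxid, actualTxid);
        }
    }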


FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:55386,DS-5edeebf2-d758-408e-b9ea-dfd03ef2db60,DISK], DatanodeInfoWithStorage[127.0.0.1:47556,DS-b4bcfc89-1628-4302-9cf3-b15a3f7e0ce9,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:47556,DS-b4bcfc89-1628-4302-9cf3-b15a3f7e0ce9,DISK], DatanodeInfoWithStorage[127.0.0.1:55386,DS-5edeebf2-d758-408e-b9ea-dfd03ef2db60,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:55386,DS-5edeebf2-d758-408e-b9ea-dfd03ef2db60,DISK], DatanodeInfoWithStorage[127.0.0.1:47556,DS-b4bcfc89-1628-4302-9cf3-b15a3f7e0ce9,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:47556,DS-b4bcfc89-1628-4302-9cf3-b15a3f7e0ce9,DISK], DatanodeInfoWithStorage[127.0.0.1:55386,DS-5edeebf2-d758-408e-b9ea-dfd03ef2db60,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1170)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1236)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1427)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1342)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:603)
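
The last two sentences of the message describe a client-side knob. A minimal sketch of how a client could relax that policy, assuming a small test cluster with no spare datanodes to substitute in (the property name comes straight from the error message; "NEVER" is one of the documented values alongside DEFAULT and ALWAYS):

    import org.apache.hadoop.conf.Configuration;

    public class ReplaceDatanodePolicySketch {
        // Hypothetical helper, not part of the failing test: builds a client
        // Configuration that skips datanode replacement on pipeline failure.
        public static Configuration clientConf() {
            Configuration conf = new Configuration();
            // Reasonable for 2-3 node test clusters; the DEFAULT policy is
            // usually the right choice in production.
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
                    "NEVER");
            return conf;
        }
    }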


FAILED:  org.apache.hadoop.hdfs.TestRenameWhileOpen.testWhileOpenRenameToNonExistentDirectory

Error Message:
Problem binding to [localhost:36777] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException

Stack Trace:
java.net.BindException: Problem binding to [localhost:36777] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:414)
        at sun.nio.ch.Net.bind(Net.java:406)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:530)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:793)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2592)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:563)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:435)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:783)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
        at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
        at org.apache.hadoop.hdfs.TestRenameWhileOpen.testWhileOpenRenameToNonExistentDirectory(TestRenameWhileOpen.java:325)
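
Hard-coded test ports such as 36777 collide whenever a leaked process from an earlier run still holds the socket. A self-contained sketch of the usual mitigation, binding to port 0 so the kernel picks a free ephemeral port (plain JDK APIs, not the actual MiniDFSCluster change):

    import java.net.InetSocketAddress;
    import java.net.ServerSocket;

    public class EphemeralPortSketch {
        public static void main(String[] args) throws Exception {
            try (ServerSocket socket = new ServerSocket()) {
                // Port 0 asks the OS for any free port, avoiding
                // "Address already in use" across concurrent test runs.
                socket.bind(new InetSocketAddress("localhost", 0));
                System.out.println("bound to free port " + socket.getLocalPort());
            }
        }
    }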


FAILED:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithKeytabs

Error Message:
test timed out after 300000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 300000 milliseconds
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:705)
        at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:1098)
        at org.apache.hadoop.hdfs.server.balancer.TestBalancer.access$000(TestBalancer.java:125)
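
The 300000 ms limit is a JUnit-level timeout, so the failure indicates a hung or very slow balancer iteration rather than a failed assertion (the trace shows the thread parked in Thread.sleep inside Balancer.run). A hedged sketch of how such a limit is typically declared (hypothetical test, not the real TestBalancer code):

    import org.junit.Test;

    public class TimeoutSketch {
        // JUnit fails the test with "test timed out after 300000 milliseconds"
        // if the method runs longer than five minutes.
        @Test(timeout = 300000)
        public void boundedOperation() throws InterruptedException {
            Thread.sleep(100);  // stand-in for the real balancing work
        }
    }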


