See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2343/changes>

Changes:

[harsh] MAPREDUCE-5045. UtilTest#isCygwin method appears to be unused. Contributed by Neelesh Srinivas Salian.

------------------------------------------
[...truncated 11858 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.457 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.039 sec - in org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.82 sec - in org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestModTime
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.72 sec - in org.apache.hadoop.hdfs.TestModTime
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.228 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.805 sec - in org.apache.hadoop.hdfs.TestFsShellPermission
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.76 sec - in org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.575 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.63 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestEncryptionZones
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.928 sec - in org.apache.hadoop.hdfs.TestEncryptionZones
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 8, Failures: 1, Errors: 3, Skipped: 0, Time elapsed: 52.237 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestLeaseRecovery2
testHardLeaseRecoveryWithRenameAfterNameNodeRestart(org.apache.hadoop.hdfs.TestLeaseRecovery2)  Time elapsed: 4.208 sec  <<< ERROR!
org.apache.hadoop.util.ExitUtil$ExitException: org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
        at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1735)
        at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:880)
        at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
        at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
        at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
        at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
        at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

        at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
        at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1704)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1739)
        at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:880)
        at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
        at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
        at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
        at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
        at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
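
For context on the ExitException above: MiniDFSCluster runs the NameNode inside the test JVM, so fatal error paths go through org.apache.hadoop.util.ExitUtil rather than a raw System.exit(); the harness disables real exits and captures them as exceptions, which is what surfaces here when FSEditLog.logSync finds no journals left to flush. A minimal sketch of that capture pattern (method names taken from trunk's ExitUtil; worth double-checking against the branch under test):

    import org.apache.hadoop.util.ExitUtil;
    import org.apache.hadoop.util.ExitUtil.ExitException;

    public class ExitCaptureSketch {
      public static void main(String[] args) {
        // Test setup calls this once so ExitUtil.terminate(...) throws
        // ExitException instead of terminating the JVM.
        ExitUtil.disableSystemExit();
        try {
          // Production code (e.g. FSEditLog.logSync) calls terminate() on
          // unrecoverable errors such as losing every edits journal.
          ExitUtil.terminate(1, "Could not sync enough journals");
        } catch (ExitException e) {
          // The "exit" is captured here; MiniDFSCluster.shutdown() later
          // asserts no such unexpected exit was recorded (see the
          // "Test resulted in an unexpected exit" failure below).
          System.out.println("captured exit: " + e.getMessage());
        }
      }
    }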

testLeaseRecoverByAnotherUser(org.apache.hadoop.hdfs.TestLeaseRecovery2)  Time elapsed: 0 sec  <<< ERROR!
java.lang.IllegalStateException: Lease monitor is not running
        at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:448)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
        at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2586)
        at org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:158)
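
The IllegalStateException above is a Guava precondition firing: the earlier aborted shutdown left the LeaseManager's monitor thread stopped, so the next test tripped the guard. A self-contained sketch of that guard style (the monitorRunning flag below is hypothetical, for illustration only; only the Preconditions call mirrors the trace):

    import com.google.common.base.Preconditions;

    class LeaseMonitorGuardSketch {
      // Hypothetical stand-in for the monitor's actual state tracking.
      private volatile boolean monitorRunning = false;

      void triggerMonitorCheckNow() {
        // Throws IllegalStateException("Lease monitor is not running")
        // when the flag is false -- the exact shape of the failure above.
        Preconditions.checkState(monitorRunning, "Lease monitor is not running");
        // ... perform the lease check ...
      }
    }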

testHardLeaseRecovery(org.apache.hadoop.hdfs.TestLeaseRecovery2)  Time elapsed: 0.003 sec  <<< ERROR!
java.io.EOFException: End of File Exception between local host is: "asf906.gq1.ygridcore.net/67.195.81.150"; destination host is: "localhost":59200; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
        at org.apache.hadoop.ipc.Client.call(Client.java:1449)
        at org.apache.hadoop.ipc.Client.call(Client.java:1376)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy21.create(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
        at com.sun.proxy.$Proxy24.create(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1224)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1155)
        at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
        at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:416)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
        at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:275)
Caused by: java.io.EOFException: null
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1103)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:998)
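
A note on the odd-looking "java.io.EOFException: null" in the Caused by: DataInputStream.readInt() throws a message-less EOFException when the stream ends before four bytes arrive, and here the IPC socket was closed under the client because the NameNode had already aborted. A tiny standalone illustration (plain java.io behavior, nothing HDFS-specific):

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.IOException;

    public class EofMessageSketch {
      public static void main(String[] args) throws IOException {
        // Two bytes is fewer than the four readInt() needs, mimicking an
        // RPC response connection that closed mid-read.
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(new byte[2]));
        try {
          in.readInt();
        } catch (EOFException e) {
          // Prints "null": the EOFException carries no message, which is
          // why the log renders it as "java.io.EOFException: null".
          System.out.println(e.getMessage());
        }
      }
    }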

org.apache.hadoop.hdfs.TestLeaseRecovery2  Time elapsed: 0.022 sec  <<< FAILURE!
java.lang.AssertionError: Test resulted in an unexpected exit
        at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1848)
        at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
        at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
        at org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:104)

Running org.apache.hadoop.hdfs.TestDFSFinalize
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.62 sec - in org.apache.hadoop.hdfs.TestDFSFinalize
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.292 sec - in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.43 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.991 sec - in org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.256 sec - in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.684 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Running org.apache.hadoop.hdfs.TestBlockReaderFactory
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.677 sec - in org.apache.hadoop.hdfs.TestBlockReaderFactory
Running org.apache.hadoop.hdfs.TestListFilesInFileContext
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.95 sec - in org.apache.hadoop.hdfs.TestListFilesInFileContext
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.989 sec - in org.apache.hadoop.hdfs.TestReplication
Running org.apache.hadoop.hdfs.TestRemoteBlockReader

Results :

Failed tests: 
  TestDataNodeMetrics.testDataNodeTimeSpend:288 null
  TestLeaseRecovery2.tearDown:104 Test resulted in an unexpected exit

Tests in error: 
  TestRetryCacheWithHA.testCreateSymlink:1199->testClientRetryWithFailover:1308 » NoClassDefFound
  TestRetryCacheWithHA.testAddCacheDirectiveInfo:1217->testClientRetryWithFailover:1308 » NoClassDefFound
  TestRetryCacheWithHA.testModifyCachePool:1251->testClientRetryWithFailover:1284 »
  TestRetryCacheWithHA.testModifyCacheDirectiveInfo:1229->testClientRetryWithFailover:1284 »
  TestRetryCacheWithHA.testAddCachePool:1244->testClientRetryWithFailover:1308 » NoClassDefFound
  TestBalancerWithMultipleNameNodes.testBalancer » Remote File /tmp.txt could on...
  TestWebHDFSOAuth2.listStatusReturnsAsExpected:147 » IO Unable to load OAuth2 c...
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:432->hardLeaseRecoveryRestartHelper:493 » Exit
  TestLeaseRecovery2.testLeaseRecoverByAnotherUser:158 » IllegalState Lease moni...
  TestLeaseRecovery2.testHardLeaseRecovery:275 » EOF End of File Exception betwe...

Tests run: 2497, Failures: 2, Errors: 5, Skipped: 6

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:18 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:16 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.065 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:19 h
[INFO] Finished at: 2015-09-22T19:19:45+00:00
[INFO] Final Memory: 70M/897M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter8395120449874804886.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5037579371859288211tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_2176253086935941022721tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
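
For anyone reproducing locally: the resume command above restarts the reactor from the failed module, but to iterate on just the broken suite it is usually faster to run the module directly with Surefire's standard test filter, e.g. (module path taken from the workspace layout above):

  cd hadoop-hdfs-project/hadoop-hdfs && mvn test -Dtest=TestLeaseRecovery2
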
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2342
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4625126 bytes
Compression is 0.0%
Took 10 sec
Recording test results
Updating MAPREDUCE-5045
