Build failed in Jenkins: Hadoop-Hdfs-trunk #2381

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2381/

Changes:

[rkanter] MAPREDUCE-6494. Permission issue when running archive-logs tool as

--
[...truncated 7380 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.301 sec - in 
org.apache.hadoop.hdfs.TestSetrepDecreasing
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.451 sec - in 
org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.533 sec - in 
org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.347 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.224 sec - in 
org.apache.hadoop.hdfs.TestLocalDFS
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.607 sec - in 
org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.243 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 17, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 136.291 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.314 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.846 sec - in 
org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.698 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.508 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.052 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.556 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.415 sec - in 
org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.774 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.981 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.558 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.272 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.853 sec - in 
org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.972 sec - in 
org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.94 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 10.196 sec - 
in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.038 sec - in 
org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.429 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.114 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.309 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend

Hadoop-Hdfs-trunk - Build # 2381 - Still Failing

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2381/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7573 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [03:58 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  03:27 h]
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.061 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:31 h
[INFO] Finished at: 2015-10-01T07:14:52+00:00
[INFO] Final Memory: 56M/626M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating MAPREDUCE-6494
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
14 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling

Error Message:
Scanner took too long to shutdown

Stack Trace:
java.lang.AssertionError: Scanner took too long to shutdown
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:677)


FAILED:  
org.apache.hadoop.hdfs.web.TestWebHDFSOAuth2.listStatusReturnsAsExpected

Error Message:
Unable to load OAuth2 connection factory.

Stack Trace:
java.io.IOException: Unable to load OAuth2 connection factory.
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at 
org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:164)
at 
org.apache.hadoop.security.ssl.ReloadingX509TrustManager.<init>(ReloadingX509TrustManager.java:81)
at 
org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:215)
at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131)
at 
org.apache.hadoop.hdfs.web.URLConnectionFactory.newSslConnConfigurator(URLConnectionFactory.java:135)
at 
org.apache.hadoop.hdfs.web.URLConnectionFactory.newOAuth2URLConnectionFactory(URLConnectionFactory.java:110)
at 
org.apache.hadoop.hdfs.web.W

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #442

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/442/

Changes:

[aajisaka] MAPREDUCE-6497. Fix wrong value of JOB_FINISHED event in

--
[...truncated 7473 lines...]

testLeaseRecoverByAnotherUser(org.apache.hadoop.hdfs.TestLeaseRecovery2)  Time 
elapsed: 0 sec  <<< ERROR!
java.lang.IllegalStateException: Lease monitor is not running
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:449)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:140)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2585)
at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:159)

testHardLeaseRecovery(org.apache.hadoop.hdfs.TestLeaseRecovery2)  Time elapsed: 
0.004 sec  <<< ERROR!
java.io.EOFException: End of File Exception between local host is: 
"asf905.gq1.ygridcore.net/67.195.81.149"; destination host is: 
"localhost":53243; : java.io.EOFException; For more details see:  
http://wiki.apache.org/hadoop/EOFException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
at org.apache.hadoop.ipc.Client.call(Client.java:1452)
at org.apache.hadoop.ipc.Client.call(Client.java:1379)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy20.create(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy21.create(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:244)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1234)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1167)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:419)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:415)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:430)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:358)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:919)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:900)
at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:276)
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at 
org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1106)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1001)

org.apache.hadoop.hdfs.TestLeaseRecovery2  Time elapsed: 0.019 sec  <<< FAILURE!
java.lang.AssertionError: Test resulted in an unexpected exit
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1849)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1836)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1829)
at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:105)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.488 sec - in 
org.apache.hadoop.hdfs.TestParallelRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteStripedFileWithFailure
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time e

Hadoop-Hdfs-trunk-Java8 - Build # 442 - Still Failing

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/442/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7666 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [03:26 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  02:20 h]
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.085 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:23 h
[INFO] Finished at: 2015-10-01T12:11:48+00:00
[INFO] Final Memory: 75M/789M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter320431791146477453.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire8908106908284537751tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_2351492371738126687122tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating MAPREDUCE-6497
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
5 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals 
to persistent storage due to No journals available to flush. Unsynced 
transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1207)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHA

Hadoop-Hdfs-trunk - Build # 2382 - Still Failing

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2382/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7658 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [03:24 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  03:19 h]
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.063 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:23 h
[INFO] Finished at: 2015-10-01T13:11:59+00:00
[INFO] Final Memory: 55M/604M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating MAPREDUCE-6497
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
19 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpointWithMultipleNN

Error Message:
new checkpoint does not exist

Stack Trace:
java.lang.AssertionError: new checkpoint does not exist
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.verifyNNCheckpoint(TestRollingUpgrade.java:619)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpoint(TestRollingUpgrade.java:596)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpointWithMultipleNN(TestRollingUpgrade.java:565)


FAILED:  org.apache.hadoop.hdfs.TestSeekBug.testSeekBugDFS

Error Message:
org/apache/hadoop/security/authentication/server/AuthenticationFilter

Stack Trace:
java.lang.NoClassDefFoundError: 
org/apache/hadoop/security/authentication/server/AuthenticationFilter
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at 
org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:447)
at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:339)
 

Build failed in Jenkins: Hadoop-Hdfs-trunk #2382

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2382/

Changes:

[aajisaka] MAPREDUCE-6497. Fix wrong value of JOB_FINISHED event in

--
[...truncated 7465 lines...]
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.366 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.258 sec - in 
org.apache.hadoop.hdfs.TestLocalDFS
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.602 sec - in 
org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.181 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 17, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 136.429 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.335 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.819 sec - in 
org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.811 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.474 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.043 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.495 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.366 sec - in 
org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.784 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.988 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.577 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.28 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.85 sec - in 
org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.938 sec - in 
org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.079 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 10.456 sec - 
in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.035 sec - in 
org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.355 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.081 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.384 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.096 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.154 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Runn

[jira] [Created] (HDFS-9186) Simplify embedding libhdfspp into other projects

2015-10-01 Thread James Clampffer (JIRA)
James Clampffer created HDFS-9186:
-

 Summary: Simplify embedding libhdfspp into other projects
 Key: HDFS-9186
 URL: https://issues.apache.org/jira/browse/HDFS-9186
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


I'd like to add a script to the root libhdfspp directory that prunes 
everything libhdfspp doesn't need to compile from the Hadoop source tree.

This makes the project a lot smaller when it is included in a third-party 
directory of another project. The directory structure, aside from the pruned 
directories, is preserved so modifications can be diffed against a fresh 
checkout of the source.
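
A minimal sketch of the idea, assuming a hard-coded keep-list (the class name 
and the libhdfspp path below are illustrative assumptions, not the actual 
script from this issue):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch: delete every regular file outside the keep-list while
// leaving the directory skeleton intact, so later changes can still be
// diffed against a fresh checkout. The keep-list path is an assumption.
public class PruneForLibhdfspp {
  private static final String KEEP =
      "hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfspp";

  public static void main(String[] args) throws IOException {
    final Path root = Paths.get(args[0]);
    Files.walk(root)
        .filter(Files::isRegularFile)
        .filter(p -> !root.relativize(p).toString().startsWith(KEEP))
        .forEach(p -> p.toFile().delete());
  }
}
{code}

It would be run from the repository root, e.g. {{java PruneForLibhdfspp .}} 
(illustrative invocation).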



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9187) Check if tracer is null before using it

2015-10-01 Thread stack (JIRA)
stack created HDFS-9187:
---

 Summary: Check if tracer is null before using it
 Key: HDFS-9187
 URL: https://issues.apache.org/jira/browse/HDFS-9187
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.8.0
Reporter: stack


Saw this when an HBase instance that had not been updated to htrace-4.0.1 was 
trying to start:

{code}
Oct 1, 5:12:11.861 AM FATAL org.apache.hadoop.hbase.master.HMaster
Failed to become active master
java.lang.NullPointerException
at org.apache.hadoop.fs.Globber.glob(Globber.java:145)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1634)
at org.apache.hadoop.hbase.util.FSUtils.getTableDirs(FSUtils.java:1372)
at 
org.apache.hadoop.hbase.util.FSTableDescriptors.getAll(FSTableDescriptors.java:206)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:619)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1481)
at java.lang.Thread.run(Thread.java:745)
{code}
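
The fix the summary suggests is a straightforward null guard. A hedged sketch 
of that pattern with htrace 4 (the class and method names here are 
illustrative, not the actual patch):

{code}
import org.apache.htrace.core.TraceScope;
import org.apache.htrace.core.Tracer;

// Sketch only: guard the tracer before use, since a client built against an
// older htrace (as in the HBase report above) can leave it null.
class TracerNullGuardExample {
  static void globWithTracing(Tracer tracer) {
    // Only open a trace scope when a tracer is actually present.
    TraceScope scope =
        (tracer == null) ? null : tracer.newScope("Globber#glob");
    try {
      // ... the traced work (here, the glob) would run ...
    } finally {
      if (scope != null) {
        scope.close();
      }
    }
  }
}
{code}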



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk-Java8 - Build # 443 - Still Failing

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/443/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7398 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [03:28 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  01:55 h]
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.054 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:58 h
[INFO] Finished at: 2015-10-01T18:46:43+00:00
[INFO] Final Memory: 70M/765M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter2244777698520510164.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire147110023986265922tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_1611634274348245402401tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-10296
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 7
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 24
  done: false
] expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! 
Current writer state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 7
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 24
 done: false
] expected:<0> but was:<2>

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #443

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/443/

Changes:

[aajisaka] HADOOP-10296. Incorrect null check in 
SwiftRestClient#buildException().

--
[...truncated 7205 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.851 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeTransferSocketSize
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.161 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeTransferSocketSize
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.221 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.317 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeExit
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.116 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeExit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestIncrementalBrVariations
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.672 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestIncrementalBrVariations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.459 sec - 
in org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.316 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.016 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.658 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataDirs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.754 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataDirs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestStorageReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.809 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestStorageReport
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestTransferRbw
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.251 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestTransferRbw
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeECN
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.887 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeECN
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeStartupOptions
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.492 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDatanodeStartupOptions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0

Hadoop-Hdfs-trunk - Build # 2383 - Still Failing

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2383/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7626 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [03:24 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  04:00 h]
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.090 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:03 h
[INFO] Finished at: 2015-10-01T20:52:44+00:00
[INFO] Final Memory: 54M/515M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-10296
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
18 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure$SlowWriter.checkReplication(TestReplaceDatanodeOnFailure.java:235)
at 
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure(TestReplaceDatanodeOnFailure.java:154)


FAILED:  
org.apache.hadoop.hdfs.TestRollingUpgrade.testDFSAdminRollingUpgradeCommands

Error Message:
expected null, but 
was:

Stack Trace:
java.lang.AssertionError: expected null, but 
was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotNull(Assert.java:664)
at org.junit.Assert.assertNull(Assert.java:646)
at org.junit.Assert.assertNull(Assert.java:656)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.checkMxBeanIsNull(TestRollingUpgrade.java:293)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.testDFSAdminRollingUpgradeCommands(TestRollingUpgrade.java:101)


FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback

Error Message:
expec

Build failed in Jenkins: Hadoop-Hdfs-trunk #2383

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2383/

Changes:

[aajisaka] HADOOP-10296. Incorrect null check in 
SwiftRestClient#buildException().

--
[...truncated 7433 lines...]
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 152.954 sec - 
in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.414 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.179 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.12 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.892 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.546 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.85 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.427 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.47 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.527 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.7 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.754 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.178 sec - in 
org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.134 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.252 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.478 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.832 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.174 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.036 sec - 
in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.191 sec - in 
org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.661 sec - in 
org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.82 sec - in 
org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.502 sec - in 
org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.419 sec - in 
org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.95 sec - in 
org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.624 sec - in 
org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.607 sec - in 
org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Sk

[jira] [Created] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-01 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-9188:
---

 Summary: Make block corruption related tests FsDataset-agnostic. 
 Key: HDFS-9188
 URL: https://issues.apache.org/jira/browse/HDFS-9188
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS, test
Affects Versions: 2.7.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu


Currently, HDFS block corruption tests work by directly accessing the files 
stored in the storage directories, which assumes {{FsDatasetImpl}} is the 
dataset implementation. However, with work like Ozone (HDFS-7240) and 
HDFS-8679, there will be other FsDataset implementations.

So we need a general way to run whitebox tests such as corrupting blocks and 
CRC files.
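
One hedged way to express that (a sketch of the idea only; the interface and 
method names are assumptions, not the eventual patch) is a small 
per-implementation test utility that hides the storage layout:

{code}
import java.io.IOException;

// Sketch: each FsDataset implementation supplies its own utils, so tests can
// corrupt a replica without knowing how or where it is stored on disk.
interface FsDatasetTestUtils {
  /** A handle to one stored replica, however the dataset materializes it. */
  interface MaterializedReplica {
    void corruptData() throws IOException;  // damage the block data
    void corruptMeta() throws IOException;  // damage the checksum (crc) file
    void deleteData() throws IOException;   // remove the block data entirely
  }

  /** Locate the replica of a block so a test can damage it. */
  MaterializedReplica getMaterializedReplica(String bpid, long blockId)
      throws IOException;
}
{code}

A corruption test would then ask the running dataset for its utils and call 
{{corruptData()}}, instead of opening files under the datanode's storage 
directories itself.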



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #444

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/444/

Changes:

[zxu] HADOOP-8437. getLocalPathForWrite should throw IOException for invalid

--
[...truncated 6613 lines...]
Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.844 sec - 
in org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeRegister
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.628 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDatanodeRegister
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.79 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.696 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.651 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.503 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.257 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 121.191 sec - 
in org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataXceiverLazyPersistHint
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.095 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataXceiverLazyPersistHint
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestParameterParser
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.628 sec - in 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestParameterParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.web.dtp.TestDtpHttp2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.685 sec - in 
org.apache.hadoop.hdfs.server.datanode.web.dtp.TestDtpHttp2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.743 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.479 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.624 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.15 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork
Java HotS

Hadoop-Hdfs-trunk-Java8 - Build # 444 - Still Failing

2015-10-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/444/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6806 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [04:18 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  01:08 h]
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.111 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:12 h
[INFO] Finished at: 2015-10-01T21:29:35+00:00
[INFO] Final Memory: 75M/1050M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5156765606729067653.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire8905785251622345347tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_155342445125202532201tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-8437
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #2384

2015-10-01 Thread Apache Jenkins Server
See 

Changes:

[zxu] HADOOP-8437. getLocalPathForWrite should throw IOException for invalid

--
[...truncated 28807 lines...]
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestWriteReadStripedFile.setup:59 » NoClassDefFound Could not initialize 
class...
  TestDFSClientExcludedNodes.testExcludedNodes:62 » NoClassDefFound 
org/apache/h...
  TestDFSClientExcludedNodes.testExcludedNodesForgiveness:94 » NoClassDefFound 
C...
  TestDatanodeReport.testDatanodeReport:55 » NoClassDefFound 
org/apache/hadoop/i...
  TestBlockReaderFactory.testShortCircuitCacheTemporaryFailure:230 » 
NoClassDefFound
  TestBlockReaderFactory.testShortCircuitCacheShutdown:395 » NoClassDefFound 
Cou...
  TestBlockReaderFactory.testMultipleWaitersOnShortCircuitCache:159 » 
NoClassDefFound
  TestBlockReaderFactory.testShortCircuitReadFromServerWithoutShm:313 » 
NoClassDefFound
  TestBlockReaderFactory.testShortCircuitReadFromClientWithoutShm:359 » 
NoClassDefFound
  TestBlockReaderFactory.testPurgingClosedReplicas:452 » NoClassDefFound Could 
n...
  TestBlockReaderFactory.testFallbackFromShortCircuitToUnixDomainTraffic:112 » 
NoClassDefFound
  TestMiniDFSCluster.testDualClusters:89 » NoClassDefFound 
org/apache/hadoop/io/...
  TestMiniDFSCluster.testClusterSetDatanodeHostname:135 » NoClassDefFound Could 
...
  TestMiniDFSCluster.testClusterNoStorageTypeSetForDatanodes:171 » 
NoClassDefFound
  TestMiniDFSCluster.testIsClusterUpAfterShutdown:114 » NoClassDefFound Could 
no...
  TestMiniDFSCluster.testClusterSetDatanodeDifferentStorageType:153 » 
NoClassDefFound
  TestMiniDFSCluster.testClusterWithoutSystemProperties:69 » NoClassDefFound 
Cou...
  TestBlocksScheduledCounter.testBlocksScheduledCounter:54 » NoClassDefFound 
org...
  TestDFSStorageStateRecovery.setUp:449 » NoClassDefFound 
org/apache/hadoop/io/e...
  TestDFSStorageStateRecovery.setUp:449 » NoClassDefFound Could not initialize 
c...
  TestDFSStorageStateRecovery.setUp:449 » NoClassDefFound Could not initialize 
c...
  TestSnapshotCommands.clusterSetUp:45 » NoClassDefFound 
org/apache/hadoop/io/er...
  
TestParallelShortCircuitRead.setupCluster:47->TestParallelReadUtil.setupCluster:70
 » NoClassDefFound
  
TestParallelShortCircuitRead.teardownCluster:59->TestParallelReadUtil.teardownCluster:393
 » NullPointer
  TestDFSPermission.setUp:118 » NoClassDefFound 
org/apache/hadoop/io/erasurecode...
  TestDFSPermission.setUp:118 » NoClassDefFound Could not initialize class 
org.a...
  TestDFSPermission.setUp:118 » NoClassDefFound Could not initialize class 
org.a...
  TestDFSPermission.setUp:118 » NoClassDefFound Could not initialize class 
org.a...
  TestDFSPermission.setUp:118 » NoClassDefFound Could not initialize class 
org.a...
  TestDFSPermission.setUp:118 » NoClassDefFound Could not initialize class 
org.a...
  TestDFSPermission.setUp:118 » NoClassDefFound Could not initialize class 
org.a...
  TestDFSPermission.setUp:118 » NoClassDefFound Could not initialize class 
org.a...
  TestParallelRead.setupCluster:37->TestParallelReadUtil.setupCluster:70 » 
NoClassDefFound
  TestParallelRead.teardownCluster:42->TestParallelReadUtil.teardownCluster:393 
» NullPointer
  TestDFSStripedOutputStream.setup:57 » NoClassDefFound 
org/apache/hadoop/io/era...
  TestDFSStripedOutputStream.setup:57 » NoClassDefFound Could not initialize 
cla...
  TestDFSStripedOutputStream.setup:57 » NoClassDefFound Could not initialize 
cla...
  TestDFSStripedOutputStream.setup:57 » NoClassDefFound Could not initialize 
cla...
  TestDFSStripedOutputStream.setup:57 » NoClassDefFound Could not initialize 
cla...
  TestDFSStripedOutputStream.setup:57 » NoClassDefFound Could not initialize 
cla...
  TestDFSStripedOutputStream.setup:57 » NoClassDefFound Could not initialize 
cla...
  TestDFSStripedOutputStream.setup:57 » NoClassDefFound Could not initialize 
cla...
  TestDFSStripedOutputStream.setup:57 » NoClassDefFound Could not initialize 
cla...
  TestDFSStr

Re: INotify stability

2015-10-01 Thread Mohammad Islam
Sorry for the late reply. Thanks ATM, Colin, and others. Indeed, applying the
patch took care of most of the memory issues. Now we are tackling the sudden
increase in the number of active threads; we will post a more concrete problem
definition shortly.
Regards,
Mohammad


 


On Tuesday, September 22, 2015 9:29 AM, Colin P. McCabe wrote:

Hi Mohammad,

Like ATM said, HDFS-8965 is an important fix in this area.  We have
found that it prevents cases where INotify tries to read invalid
sequences of bytes (sometimes because the edit log was truncated or
corrupted; other times because it is in the middle of a write).
HDFS-8964 fixes the "attempt to read edit log entries that are still
being written" case.  Both fixes have value when using INotify.

The other question is how many INotify clients you have running
simultaneously.  Right now, INotify works best with a small number of
clients, since it involves doing RPCs to the NameNode.  There is some
work going on to scale out INotify to support many clients in
HDFS-8940.
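
Each "client" in this sense is just a consumer of the NameNode's edit
stream obtained through HdfsAdmin.getInotifyEventStream(). As a rough
illustration (a sketch assuming Hadoop 2.7+ APIs, where the stream
yields EventBatch, and a hypothetical NameNode URI), a minimal polling
loop looks like this:

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
  import org.apache.hadoop.hdfs.client.HdfsAdmin;
  import org.apache.hadoop.hdfs.inotify.Event;
  import org.apache.hadoop.hdfs.inotify.EventBatch;

  public class InotifyTail {
    public static void main(String[] args) throws Exception {
      // Hypothetical NameNode URI; replace with your cluster's address.
      HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://namenode:8020"),
          new Configuration());
      DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
      while (true) {
        // take() blocks until events are available; each call is served
        // by RPCs to the NameNode, which is why the client count matters.
        EventBatch batch = stream.take();
        for (Event event : batch.getEvents()) {
          System.out.println(event.getEventType() + ": " + event);
        }
      }
    }
  }

Every such loop polls the NameNode over RPC, so many concurrent clients
multiply load on the NN itself rather than on the DataNodes.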

best,
Colin

On Tue, Sep 15, 2015 at 11:52 AM, Mohammad Islam wrote:
> Hi, we were using the INotify feature in one of our internal services. Looks
> like it creates a lot of memory pressure on the NN. Memory usage goes very
> high and stays there, causing expensive GC.
> Did anyone use this feature in any service? Is there any configuration to
> set up? We are using the latest CDH.
> Regards,
> Mohammad
>