Hadoop-Hdfs-trunk - Build # 3098 - Still Failing

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3098/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7187 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:30 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:32 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.101 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:37 h
[INFO] Finished at: 2016-05-04T09:02:19+00:00
[INFO] Final Memory: 60M/916M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
8 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncRenameWithOverwrite

Error Message:
test timed out after 12 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 12 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:825)
at 
org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:784)
at 
org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:430)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:379)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:372)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:365)
at 
org.apache.hadoop.hdfs.TestAsyncDFSRename.internalTestConcurrentAsyncRenameWithOverwrite(TestAsyncDFSRename.java:221)
at 
org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncRenameWithOverwrite(TestAsyncDFSRename.java:184)
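
Timeouts like this one are normally produced by the per-test timeout on the JUnit 4 
@Test annotation rather than by the test body itself. A minimal sketch of the pattern, 
with an assumed 60-second value (the real timeout in TestAsyncDFSRename may differ), 
plus the usual way to re-run only this class through surefire:

    // Sketch only; the timeout value is an assumption, not taken from the test source.
    @Test(timeout = 60000)
    public void testAggressiveConcurrentAsyncRenameWithOverwrite() throws Exception {
        // If the body (file creation, async renames, waiting on futures) runs past
        // the timeout, surefire reports:
        //   java.lang.Exception: test timed out after 60000 milliseconds
    }

    # From hadoop-hdfs-project/hadoop-hdfs, re-run only this test class:
    mvn test -Dtest=TestAsyncDFSRename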


FAILED:  
org.apache.hadoop.hdfs.TestAsyncDFSRename.testCallGetReturnValueMultipleTimes

Error Message:
Can

Build failed in Jenkins: Hadoop-Hdfs-trunk #3098

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3098/

Changes:

[iwasakims] HADOOP-12101. Add automatic search of default Configuration 
variables to

--
[...truncated 6994 lines...]
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.148 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 382.995 sec - 
in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.771 sec - in 
org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.048 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 226.632 sec - 
in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.552 sec - in 
org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.683 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 85.397 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.42 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.609 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.36 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.171 sec - in 
org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 139.102 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.889 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 163.199 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.146 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 123.49 sec - 
in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.138 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.324 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.TestGenericRefresh
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.662 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.562 sec - in 
org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.303 sec - in 
org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.799 sec - in 
org.apache.hadoop.TestGenericRefresh
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.428 sec - 
in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.714 sec - in 
org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.356 sec - in 
org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1167

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1167/

Changes:

[rohithsharmaks] YARN-4947. Test timeout is happening for 
TestRMWebServicesNodes.

--
[...truncated 5781 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.93 sec - in 
org.apache.hadoop.fs.TestUnbuffer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.523 sec - in 
org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 17.441 sec - 
in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.178 sec - in 
org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.795 sec - 
in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.173 sec - in 
org.apache.hadoop.fs.shell.TestHdfsTextCommand
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.281 sec - 
in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.349 sec - in 
org.apache.hadoop.fs.TestResolveHdfsSymlink
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.545 sec - 
in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.364 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.468 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.855 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.477 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.103 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.472 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.686 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.9

Hadoop-Hdfs-trunk-Java8 - Build # 1167 - Still Failing

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1167/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5974 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:07 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:41 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.097 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:45 h
[INFO] Finished at: 2016-05-04T09:32:00+00:00
[INFO] Final Memory: 57M/472M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout

Error Message:
write timedout too late in 1258 ms.

Stack Trace:
java.io.IOException: write timedout too late in 1258 ms.
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.OutputStream.write(OutputStream.java:75)
at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1102)
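
This is a timing assertion rather than a functional failure: the test arranges for the 
client's socket write to block, expects the configured write timeout to fire, and fails 
when the timeout surfaces too late. A minimal sketch of that pattern under assumed 
values (the timeout and slack factor are illustrative, not the ones used by 
TestDistributedFileSystem):

    // Sketch: bound from above how long a blocking write may take before its
    // timeout surfaces. 'out' is a stream configured with a write timeout of timeoutMs.
    static void assertWriteTimesOutQuickly(java.io.OutputStream out, byte[] buf,
            long timeoutMs) throws java.io.IOException {
        long start = System.nanoTime();
        try {
            out.write(buf);      // expected to block until the write timeout fires
            out.flush();
        } catch (java.io.IOException expected) {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000L;
            if (elapsedMs > 3 * timeoutMs) {          // assumed slack factor
                throw new java.io.IOException(
                        "write timedout too late in " + elapsedMs + " ms.");
            }
        }
    }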




-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

Hadoop-Hdfs-trunk-Java8 - Build # 1168 - Still Failing

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1168/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5979 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:29 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:39 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.113 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:44 h
[INFO] Finished at: 2016-05-04T13:19:31+00:00
[INFO] Final Memory: 57M/640M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
The stream is closed

Stack Trace:
java.io.IOException: The stream is closed
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at 
org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:881)
at 
org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:730)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:725)
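
The test interrupts a writer around hflush(), and the failure shows the DataStreamer's 
close path racing with a stream that has already been shut down. A rough sketch of the 
pattern being exercised (paths and sizes are illustrative; this is not the actual 
TestHFlush code and assumes a running MiniDFSCluster in 'cluster'):

    // Sketch: interrupt the writer thread around hflush(), then close.
    FileSystem fs = cluster.getFileSystem();
    FSDataOutputStream out = fs.create(new Path("/test/hflush-interrupt"));
    out.write(new byte[64 * 1024]);
    Thread.currentThread().interrupt();   // simulate the interrupt delivered by the test
    try {
        out.hflush();                     // may throw an InterruptedIOException
    } finally {
        out.close();                      // close() can then race with the DataStreamer
    }                                     // tearing down its socket streams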




-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1168

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1168/

Changes:

[iwasakims] HADOOP-12101. Add automatic search of default Configuration 
variables to

--
[...truncated 5786 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.616 sec - in 
org.apache.hadoop.fs.TestUnbuffer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.037 sec - in 
org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 16.491 sec - 
in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec - in 
org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.318 sec - 
in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.331 sec - in 
org.apache.hadoop.fs.shell.TestHdfsTextCommand
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.507 sec - 
in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.167 sec - in 
org.apache.hadoop.fs.TestResolveHdfsSymlink
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.279 sec - in 
org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.093 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.167 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.517 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.05 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.386 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.82 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.255 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.3

Build failed in Jenkins: Hadoop-Hdfs-trunk #3099

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3099/

Changes:

[aajisaka] HADOOP-12504. Remove metrics v1. (aajisaka)

--
[...truncated 7731 lines...]
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.38 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.508 sec - in 
org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestRead
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.069 sec - in 
org.apache.hadoop.hdfs.TestFileChecksum
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.062 sec - in 
org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.617 sec - in 
org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 15.001 sec - in 
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.1 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestFileStatusWithECPolicy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 111.422 sec - 
in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.403 sec - in 
org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.087 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.923 sec - in 
org.apache.hadoop.hdfs.TestFileStatusWithECPolicy
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.54 sec - in 
org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.793 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.02 sec - in 
org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec - in 
org.apache.hadoop.hdfs.protocol.TestAnnotations
Running org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.563 sec - in 
org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.962 sec - in 
org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.952 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.645 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.434 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.962 sec - in 
org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
Running org.apache.hadoop.hdfs.TestFetchImage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.101 sec - in 
org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.691 sec - in 
org.apache.hadoop.hdfs.TestFetchImage
Running org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.639 sec - 
in org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.477 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecovery2
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010
Tests run: 2, 

Hadoop-Hdfs-trunk - Build # 3099 - Still Failing

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3099/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7924 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [07:23 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:24 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.193 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:32 h
[INFO] Finished at: 2016-05-04T16:20:41+00:00
[INFO] Final Memory: 59M/902M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
11 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty

Error Message:
Problem binding to [localhost:9820] java.net.BindException: Address already in 
use; For more details see:  http://wiki.apache.org/hadoop/BindException

Stack Trace:
java.net.BindException: Problem binding to [localhost:9820] 
java.net.BindException: Address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:530)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:793)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2592)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:563)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:426)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:783)
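
Bind failures on a fixed port such as localhost:9820 usually mean another process 
(often a leftover NameNode from a previous test run) still holds the port; the usual 
remedy in tests is to bind to an ephemeral port instead. A minimal, generic sketch of 
the difference (not the TestDFSShell code):

    // Binding to a fixed port can fail with "Address already in use":
    //   new java.net.ServerSocket(9820);
    // Binding to port 0 asks the kernel for a free ephemeral port instead:
    java.net.ServerSocket ss = new java.net.ServerSocket(0);
    int actualPort = ss.getLocalPort();   // use this port in the rest of the test
    ss.close();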
  

[jira] [Created] (HDFS-10362) TestDataNodeLifeline always fails on trunk on mac.

2016-05-04 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-10362:
-

 Summary: TestDataNodeLifeline always fails on trunk on mac.
 Key: HDFS-10362
 URL: https://issues.apache.org/jira/browse/HDFS-10362
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Rushabh S Shah


TestDataNodeLifeline always fails on my local box, but I haven't seen any failure 
in the jenkins builds.
{noformat}
Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.861 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeLifeline
testSendLifelineIfHeartbeatBlocked(org.apache.hadoop.hdfs.server.datanode.TestDataNodeLifeline)
  Time elapsed: 18.81 sec  <<< FAILURE!
java.lang.AssertionError: Expect DataNode to be kept alive by lifeline. 
expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeLifeline.testSendLifelineIfHeartbeatBlocked(TestDataNodeLifeline.java:185)


Results :

Failed tests: 
  TestDataNodeLifeline.testSendLifelineIfHeartbeatBlocked:185 Expect DataNode 
to be kept alive by lifeline. expected:<1> but was:<0>
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10362) TestDataNodeLifeline always fails on trunk on mac.

2016-05-04 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah resolved HDFS-10362.
---
Resolution: Invalid

Apparently the test case fails when my laptop is connected to wifi and passes 
if connected via ethernet.
So closing as invalid.

> TestDataNodeLifeline always fails on trunk on mac.
> --
>
> Key: HDFS-10362
> URL: https://issues.apache.org/jira/browse/HDFS-10362
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Rushabh S Shah
>
> TestDataNodeLifeline fails always on my local box but haven't seen any 
> failure in jenkins build.
> {noformat}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.861 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeLifeline
> testSendLifelineIfHeartbeatBlocked(org.apache.hadoop.hdfs.server.datanode.TestDataNodeLifeline)
>   Time elapsed: 18.81 sec  <<< FAILURE!
> java.lang.AssertionError: Expect DataNode to be kept alive by lifeline. 
> expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeLifeline.testSendLifelineIfHeartbeatBlocked(TestDataNodeLifeline.java:185)
> Results :
> Failed tests: 
>   TestDataNodeLifeline.testSendLifelineIfHeartbeatBlocked:185 Expect DataNode 
> to be kept alive by lifeline. expected:<1> but was:<0>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10359) Allow trigger block report from all datanodes

2016-05-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-10359.
--
Resolution: Won't Fix

I think there is momentum in this conversation towards a "Won't Fix" 
resolution, so I'm resolving the issue now.  [~Tao Jie], thank you for the 
discussion.  Even though this didn't lead to an enhancement, we appreciate the 
participation.

> Allow trigger block report from all datanodes
> -
>
> Key: HDFS-10359
> URL: https://issues.apache.org/jira/browse/HDFS-10359
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Tao Jie
>
> Since HDFS-7278 allows triggering a block report from one specific datanode, 
> it would be helpful to add an option to this command to trigger block 
> reports from all datanodes.
> Command maybe like this:
> *hdfs dfsadmin -triggerBlockReport \[-incremental\] 
> *
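
For reference, the per-datanode form added by HDFS-7278 looks roughly like this (the 
placeholder argument is illustrative):
{noformat}
hdfs dfsadmin -triggerBlockReport [-incremental] <datanode_host:ipc_port>
{noformat}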



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10364) Log current node in reversexml tool when parse failed

2016-05-04 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-10364:


 Summary: Log current node in reversexml tool when parse failed
 Key: HDFS-10364
 URL: https://issues.apache.org/jira/browse/HDFS-10364
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Xiao Chen
Assignee: Xiao Chen
Priority: Trivial


Sometimes we want to modify the xml before converting it. If an error 
happens, it is hard to find out where. Adding a log line that tells where the 
failure occurred would be helpful.
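
A sketch of the kind of logging being proposed, assuming a StAX-based parser (the 
reader variable, logger, and handler method are illustrative, not the actual 
OfflineImageReconstructor code):
{noformat}
// Sketch: report which XML element was being processed when parsing failed.
try {
  processNode(reader);                                 // hypothetical per-node handler
} catch (XMLStreamException e) {
  Location loc = reader.getLocation();
  LOG.error("Failed to parse element <" + reader.getLocalName() + "> at line "
      + loc.getLineNumber() + ", column " + loc.getColumnNumber(), e);
  throw e;
}
{noformat}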



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10363) Introduce new config keys for SCM service addresses

2016-05-04 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-10363:


 Summary: Introduce new config keys for SCM service addresses
 Key: HDFS-10363
 URL: https://issues.apache.org/jira/browse/HDFS-10363
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The SCM should have its own config keys to specify service addresses.
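
A minimal sketch of the pattern being proposed, with a hypothetical key name and an 
assumed default (these are not the keys that were ultimately added):
{noformat}
// Hypothetical SCM address key, following the usual Hadoop Configuration pattern.
public static final String SCM_CLIENT_ADDRESS_KEY = "ozone.scm.client.address";

Configuration conf = new Configuration();
InetSocketAddress clientRpcAddr = NetUtils.createSocketAddr(
    conf.get(SCM_CLIENT_ADDRESS_KEY, "0.0.0.0:9860"));   // assumed default
{noformat}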



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1169

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1169/

Changes:

[aajisaka] HADOOP-12504. Remove metrics v1. (aajisaka)

--
[...truncated 5685 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.154 sec - in 
org.apache.hadoop.hdfs.TestDFSRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.169 sec - 
in org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSafeMode
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.692 sec - in 
org.apache.hadoop.hdfs.TestSafeMode
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.725 sec - 
in org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.577 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.228 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.732 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.527 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.677 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.737 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.19 sec - in 
org.apache.hadoop.hdfs.tools.TestGetConf
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.049 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.051 sec - in 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.549 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestGetGroups
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.562 sec - in 
org.apache.hadoop.hdfs.tools.TestGetGroups
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.18 sec - in 
org.apache.hadoo

Hadoop-Hdfs-trunk-Java8 - Build # 1169 - Still Failing

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1169/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5878 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:11 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:20 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.478 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:24 h
[INFO] Finished at: 2016-05-04T18:11:12+00:00
[INFO] Final Memory: 69M/469M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter8807566827775211528.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire2155751575760342705tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_4172001895405668536613tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
The stream is closed

Stack Trace:
java.io.IOException: The stream is closed
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at 
org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:881)
at 
org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:730)
a

[jira] [Created] (HDFS-10365) FullBlockReports retransmission delays NN startup time in large cluster.

2016-05-04 Thread Chackaravarthy (JIRA)
Chackaravarthy created HDFS-10365:
-

 Summary: FullBlockReports retransmission delays NN startup time in 
large cluster.
 Key: HDFS-10365
 URL: https://issues.apache.org/jira/browse/HDFS-10365
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.6.0
 Environment: version - hadoop-2.6.0
DN - 1200 nodes
Reporter: Chackaravarthy
Priority: Critical


Whenever the NN is restarted, it takes a very long time for the NN to come back to a 
stable state, i.e. the last-contact time stays above 1 or 2 minutes continuously for 
around 3 to 4 hours. This is mainly because most of the DNs hit the timeout 
(60s) on the blockReport (FBR) RPC call and then keep sending the FBR again.
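
One mitigation sometimes applied to oversized full block reports is to lower the split 
threshold so that each DataNode storage is reported in its own RPC, which keeps 
individual blockReport calls smaller and less likely to hit the 60s timeout. A sketch, 
assuming the threshold value shown is appropriate for the cluster:
{noformat}
// Sketch: make the DataNode send one blockReport RPC per storage.
Configuration conf = new HdfsConfiguration();
conf.setLong(DFSConfigKeys.DFS_BLOCKREPORT_SPLIT_THRESHOLD_KEY, 1L);
// equivalent to setting dfs.blockreport.split.threshold in hdfs-site.xml
{noformat}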



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-Hdfs-trunk #3100

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3100/

Changes:

[arp] HADOOP-13068. Clean up RunJar and related test class. (Contributed by

[jing9] HADOOP-12469. distcp should not ignore the ignoreFailures option.

[junping_du] YARN-4920. ATS/NM should support a link to dowload/get the logs in 
text

--
[...truncated 5190 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.225 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestSmallBlock
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.976 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.273 sec - in 
org.apache.hadoop.hdfs.TestSmallBlock
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.573 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.887 sec - in 
org.apache.hadoop.hdfs.web.TestAuthFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.296 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.049 sec - 
in org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.655 sec - in 
org.apache.hadoop.hdfs.web.TestJsonUtil
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.753 sec - 
in org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Running org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.221 sec - 
in org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.587 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Running org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 149.543 sec - 
in org.apache.hadoop.hdfs.TestDFSClientRetries
Running org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.508 sec - in 
org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.014 sec - in 
org.apache.hadoop.hdfs.web.resources.TestParam
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.91 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFS
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter
Tests run: 64, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 29.572 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Running org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.586 sec - in 
org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.666 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.274 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.776 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.76 sec - in 
org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Running org.apache.hadoop.security.TestPermission
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.606 sec - 
in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.419 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.774 sec - in 
org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.663 sec - in 
org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0,

Hadoop-Hdfs-trunk - Build # 3100 - Still Failing

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3100/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5383 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:08 min]
[INFO] Apache Hadoop HDFS  FAILURE [53:47 min]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.101 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 57:57 min
[INFO] Finished at: 2016-05-04T18:49:34+00:00
[INFO] Final Memory: 63M/860M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs[1]

Error Message:
logging edit without syncing should do not affect txid expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: logging edit without syncing should do not affect 
txid expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs(TestEditLog.java:594)


FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
null

Stack Trace:
java.nio.channels.ClosedByInterruptException: null
at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
at 
org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputSt

Build failed in Jenkins: Hadoop-Hdfs-trunk #3101

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[wangda] YARN-4984. LogAggregationService shouldn't swallow exception in 
handling

--
[...truncated 3879 lines...]
Running org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.456 sec - in 
org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.451 sec - in 
org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Running org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec - in 
org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Running 
org.apache.hadoop.hdfs.web.oauth2.TestClientCredentialTimeBasedTokenRefresher
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.982 sec - in 
org.apache.hadoop.hdfs.web.oauth2.TestClientCredentialTimeBasedTokenRefresher
Running org.apache.hadoop.hdfs.web.oauth2.TestAccessTokenTimer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.276 sec - in 
org.apache.hadoop.hdfs.web.oauth2.TestAccessTokenTimer
Running 
org.apache.hadoop.hdfs.web.oauth2.TestRefreshTokenTimeBasedTokenRefresher
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.024 sec - in 
org.apache.hadoop.hdfs.web.oauth2.TestRefreshTokenTimeBasedTokenRefresher
Running org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.529 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Running org.apache.hadoop.hdfs.web.TestTokenAspect
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.452 sec - in 
org.apache.hadoop.hdfs.web.TestTokenAspect
Running org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.535 sec - in 
org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Running org.apache.hadoop.hdfs.web.TestWebHDFSOAuth2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.708 sec - in 
org.apache.hadoop.hdfs.web.TestWebHDFSOAuth2
Running org.apache.hadoop.hdfs.TestPeerCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.418 sec - in 
org.apache.hadoop.hdfs.TestPeerCache
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitShm
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.449 sec - in 
org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitShm
Running org.apache.hadoop.hdfs.TestDefaultNameNodePort
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.6 sec - in 
org.apache.hadoop.hdfs.TestDefaultNameNodePort
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.52 sec - in 
org.apache.hadoop.hdfs.util.TestByteArrayManager
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in 
org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec - in 
org.apache.hadoop.hdfs.TestDFSPacket

Results :

Tests run: 49, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs-client ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ 
hadoop-hdfs-client ---
[INFO] Building jar: 

[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs-client >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-client 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ 
hadoop-hdfs-client ---
[INFO] No changes detected in protoc files, skipping generation.
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs-client <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs-client ---
[INFO] Building jar: 

[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs-client >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-client 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ 
hadoop-hdfs-client ---
[INFO] No changes detected in protoc files, skipping generation.
[INFO]

Hadoop-Hdfs-trunk - Build # 3101 - Still Failing

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3101/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 4072 lines...]
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . FAILURE [02:53 min]
[INFO] Apache Hadoop HDFS  SKIPPED
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.104 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:55 min
[INFO] Finished at: 2016-05-04T19:25:54+00:00
[INFO] Final Memory: 70M/1503M
[INFO] 
[ERROR] Failed to execute goal 
org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs (default-cli) on project 
hadoop-hdfs-client: Execution default-cli of goal 
org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs failed: Java returned: 
134 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
All tests passed


[jira] [Created] (HDFS-10366) libhdfs++: Add SASL authentication

2016-05-04 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-10366:
-

 Summary: libhdfs++: Add SASL authentication
 Key: HDFS-10366
 URL: https://issues.apache.org/jira/browse/HDFS-10366
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen
Assignee: Bob Hansen


Enable communication with HDFS clusters that have KERBEROS authentication 
enabled; use tokens from NN when communicating with DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




Hadoop-Hdfs-trunk-Java8 - Build # 1170 - Still Failing

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1170/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6001 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:55 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:50 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.139 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:55 h
[INFO] Finished at: 2016-05-04T22:11:05+00:00
[INFO] Final Memory: 57M/724M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas

Error Message:

Expected: is 
 but: was 

Stack Trace:
java.lang.AssertionError: 
Expected: is 
 but: was 
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:865)
at org.junit.Assert.assertThat(Assert.java:832)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.ensureFileReplicasOnStorageType(LazyPersistTestCase.java:141)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:53)


FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs[1]

Error Message:
logging edit without syncing should do not affect txid expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: logging edit without syncing should do not affect 
txid expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBat

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1170

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[arp] HADOOP-13068. Clean up RunJar and related test class. (Contributed by

[jing9] HADOOP-12469. distcp should not ignore the ignoreFailures option.

[junping_du] YARN-4920. ATS/NM should support a link to dowload/get the logs in 
text

--
[...truncated 5808 lines...]
at 
org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:881)
at 
org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:730)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:725)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDisableConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.895 sec - in 
org.apache.hadoop.hdfs.TestDisableConnCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.288 sec - in 
org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.37 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteReadStripedFile
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 131.807 sec - 
in org.apache.hadoop.hdfs.TestWriteReadStripedFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.775 sec - in 
org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.26 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.582 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.224 sec - in 
org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.416 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.113 sec - in 
org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.197 sec - in 
org.apache.hadoop.hdfs.TestSnapshotCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.757 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.443 sec - in 
org.apache.hadoop.hdfs.TestDFSPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.507 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.972 sec - in 
org.apache.hadoop.hdfs.TestParallelRead
J

Hadoop-Hdfs-trunk - Build # 3102 - Still Failing

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3102/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5382 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:13 min]
[INFO] Apache Hadoop HDFS  FAILURE [54:21 min]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.102 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 58:36 min
[INFO] Finished at: 2016-05-04T23:13:47+00:00
[INFO] Final Memory: 58M/849M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume

Error Message:
Expected to find 
'DatanodeInfoWithStorage[127.0.0.1:42158,DS-288e3e6a-5e94-44ad-8b50-be8bbd5308ab,DISK]'
 but got unexpected exception:java.io.IOException: All datanodes 
[DatanodeInfoWithStorage[127.0.0.1:42158,DS-f3b02387-9e45-47bc-9a6f-f2523e5ff66c,DISK]]
 are bad. Aborting...
 at 
org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1394)
 at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
 at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
 at 
org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1122)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:549)


Stack Trace:
java.lang.AssertionError: Expected to find 
'DatanodeInfoWithStorage[127.0.0.1:42158,DS-288e3e6a-5e94-44ad-8b50-be8bbd5308ab,DISK]'
 but got unexpected exception:java.io.IOException: All datanodes 
[DatanodeInfoWithStorage[127.0.0.1:42158,DS-f3b02387-9e45-47bc-9a6f-f2523e5ff66c,DISK]]
 are bad. Aborting...
at 
org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1394)
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendO

Build failed in Jenkins: Hadoop-Hdfs-trunk #3102

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[vinodkv] YARN-4905. Improved "yarn logs" command-line to optionally show log

--
[...truncated 5189 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.858 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.272 sec - in 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.948 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.454 sec - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.856 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.937 sec - in 
org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.905 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.4 sec - in 
org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.394 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.691 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.369 sec - in 
org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.898 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.344 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.842 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.373 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.211 sec - 
in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.82 sec - in 
org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.854 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.079 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.322 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.644 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.891 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.454 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 92.792 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 108.399 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.013 sec - 
in org.apache.hadoop

Hadoop-Hdfs-trunk - Build # 3103 - Still Failing

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3103/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5379 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:59 min]
[INFO] Apache Hadoop HDFS  FAILURE [54:26 min]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.101 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 58:27 min
[INFO] Finished at: 2016-05-05T01:50:29+00:00
[INFO] Final Memory: 58M/728M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError 
-DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter6413635859381428720.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire6078564344705456044tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_2232462118625898700212tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
All tests passed


Build failed in Jenkins: Hadoop-Hdfs-trunk #3103

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[mingma] HDFS-10320. Rack failures may result in NN terminate. (Xiao Chen via

--
[...truncated 5186 lines...]
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Running org.apache.hadoop.hdfs.tools.TestGetConf
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.702 sec - in 
org.apache.hadoop.hdfs.tools.TestGetConf
Running org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.344 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.837 sec - in 
org.apache.hadoop.hdfs.TestSafeMode
Running org.apache.hadoop.hdfs.tools.TestGetGroups
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.367 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.951 sec - in 
org.apache.hadoop.hdfs.tools.TestGetGroups
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.812 sec - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.749 sec - in 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.977 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.646 sec - in 
org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.787 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.992 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.568 sec - in 
org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.423 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.663 sec - in 
org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.362 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.525 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.949 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.72 sec - in 
org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.5 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.503 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.486 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.725 sec - in 
org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.289 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.363 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.438 sec - in 
org.apache.hadoop.hdf

[jira] [Resolved] (HDFS-7330) Unclosed RandomAccessFile warnings in FSDatasetIml.

2016-05-04 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-7330.
---
   Resolution: Duplicate
Fix Version/s: 2.7.0

> Unclosed RandomAccessFile warnings in FSDatasetIml.
> ---
>
> Key: HDFS-7330
> URL: https://issues.apache.org/jira/browse/HDFS-7330
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.1
>Reporter: Konstantin Shvachko
>Assignee: Andras Bokor
>  Labels: newbie
> Fix For: 2.7.0
>
>
> RandomAccessFile is opened as an underlying file for FileInputStream. It 
> should be closed when the stream is closed. So to fix these 2 warnings (in 
> getBlockInputStream() and getTmpInputStreams()) we just need to suppress them.
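
For illustration only: a minimal, self-contained Java sketch of the pattern the
description refers to (class, method, and file names here are made up; this is
not the FsDatasetImpl code). A FileInputStream is built on the
RandomAccessFile's descriptor, and try-with-resources closes both objects, so
no resource is left open regardless of which wrapper "owns" the descriptor.

{noformat}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.RandomAccessFile;

public class BlockFileReadSketch {
  // Open a data file through RandomAccessFile, expose it as an InputStream
  // via the shared FileDescriptor, and let try-with-resources close both.
  static long sumBytes(File dataFile, long offset) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(dataFile, "r");
         FileInputStream in = new FileInputStream(raf.getFD())) {
      raf.seek(offset);               // positions the shared descriptor
      long sum = 0;
      int b;
      while ((b = in.read()) != -1) { // reads continue from 'offset'
        sum += b;
      }
      return sum;
    }                                 // 'in' then 'raf' are closed here
  }

  public static void main(String[] args) throws IOException {
    System.out.println(sumBytes(new File(args[0]), 0L));
  }
}
{noformat}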



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




Hadoop-Hdfs-trunk-Java8 - Build # 1171 - Still Failing

2016-05-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1171/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 9127 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:14 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:46 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.063 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:50 h
[INFO] Finished at: 2016-05-05T02:36:59+00:00
[INFO] Final Memory: 62M/484M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.4:single (dist) on project 
hadoop-hdfs: Error reading assemblies: Error reading descriptor: hadoop-dist: 
invalid code lengths set -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
All tests passed


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1171

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[wangda] YARN-4984. LogAggregationService shouldn't swallow exception in 
handling

[vinodkv] YARN-4905. Improved "yarn logs" command-line to optionally show log

--
[...truncated 8934 lines...]
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[:
 finished (took 0s, not including scanning the source files).
  [javadoc] $1.class]]
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
RegularFileObject[
  [javadoc] [loading 
ZipFileIndexFileObject[
  [javadoc] [loading 
ZipFileIndexFileObject[
  [javadoc] [loading 
ZipFileIndexFileO

[jira] [Created] (HDFS-10367) TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.

2016-05-04 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-10367:
---

 Summary: TestDFSShell.testMoveWithTargetPortEmpty fails with 
Address bind exception.
 Key: HDFS-10367
 URL: https://issues.apache.org/jira/browse/HDFS-10367
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula



{noformat}
Problem binding to [localhost:9820] java.net.BindException: Address already in 
use; For more details see:  http://wiki.apache.org/hadoop/BindException

Stack Trace:
java.net.BindException: Problem binding to [localhost:9820] 
java.net.BindException: Address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:530)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:793)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2592)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:563)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:426)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:783)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
at 
org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:567)
{noformat}
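
For context, a minimal, self-contained Java sketch of the failure mode (made-up
class name, plain JDK sockets rather than the MiniDFSCluster/TestDFSShell code):
binding to a fixed port such as 9820 fails with "Address already in use" when
another process on the build slave already holds it, whereas binding to port 0
asks the OS for a free ephemeral port and reads back the number actually
assigned.

{noformat}
import java.net.BindException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class EphemeralPortSketch {
  public static void main(String[] args) throws Exception {
    // A fixed, well-known port such as 9820 may already be taken on a shared
    // build machine; this bind is the step that throws BindException then.
    try (ServerSocket fixed = new ServerSocket()) {
      fixed.bind(new InetSocketAddress("localhost", 9820));
      System.out.println("bound fixed port " + fixed.getLocalPort());
    } catch (BindException e) {
      System.out.println("fixed port unavailable: " + e.getMessage());
    }

    // Port 0 lets the OS pick a free ephemeral port, so concurrent tests do
    // not race for the same number; getLocalPort() reports what was chosen.
    try (ServerSocket ephemeral = new ServerSocket()) {
      ephemeral.bind(new InetSocketAddress("localhost", 0));
      System.out.println("bound ephemeral port " + ephemeral.getLocalPort());
    }
  }
}
{noformat}

Whether the real test can switch to an ephemeral port depends on what it
verifies (the target URI here deliberately omits the port), so the sketch only
shows the collision and the usual way tests avoid it.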



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
