Build failed in Jenkins: Hadoop-Hdfs-trunk #2988

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2988/

Changes:

[rohithsharmaks] YARN-4607. Pagination support for AppAttempt page

--
[...truncated 5236 lines...]
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.314 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.193 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.838 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.548 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.329 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.983 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.72 sec - in 
org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.399 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.361 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.793 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.04 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 114.899 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.329 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 94.726 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.964 sec - 
in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.345 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.51 sec - in 
org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.208 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.818 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.089 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.65 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.024 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.83 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.91 sec - in 
org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.521 sec - in 
org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.756 sec - in 
org.apache.hadoop.tools.TestHdfsConfigFields

Hadoop-Hdfs-trunk - Build # 2988 - Still Failing

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2988/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5429 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:12 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  04:25 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.080 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:29 h
[INFO] Finished at: 2016-04-04T07:16:24+00:00
[INFO] Final Memory: 56M/585M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
4 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:33229,DS-592ed55c-8b53-498d-b309-bda6410ee839,DISK], DatanodeInfoWithStorage[127.0.0.1:37354,DS-2adc497a-6c76-492f-aa3f-0fdcbca2ab0f,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:33229,DS-592ed55c-8b53-498d-b309-bda6410ee839,DISK], DatanodeInfoWithStorage[127.0.0.1:37354,DS-2adc497a-6c76-492f-aa3f-0fdcbca2ab0f,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:33229,DS-592ed55c-8b53-498d-b309-bda6410ee839,DISK], DatanodeInfoWithStorage[127.0.0.1:37354,DS-2adc497a-6c76-492f-aa3f-0fdcbca2ab0f,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:33229,DS-592ed55c-8b53-498d-b309-bda6410ee839,DISK], DatanodeInfoWithStorage[127.0.0.1:37354,DS-2adc497a-6c76-492f-aa3f-0fdcbca2ab0f,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.re
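
The replacement policy named in this failure is an ordinary client-side setting; the message itself points at 'dfs.client.block.write.replace-datanode-on-failure.policy'. A minimal sketch of how a test or client might relax it (the property name and its DEFAULT/NEVER/ALWAYS values come from the message above; the surrounding scaffolding is illustrative, not the actual test code):

    import org.apache.hadoop.conf.Configuration;

    public class ReplaceDatanodePolicyExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // On tiny test clusters (2-3 datanodes) there is often no spare node
        // to swap into the pipeline, so recovery fails as reported above.
        // Relaxing the policy is a common test-only workaround; DEFAULT is
        // the safer choice on production clusters.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
            "NEVER");
        System.out.println(conf.get(
            "dfs.client.block.write.replace-datanode-on-failure.policy"));
      }
    }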

Re: 2.7.3 release plan

2016-04-04 Thread Tsz Wo Sze
> ... I volunteer to help on releasing 2.8.0 as 2.7.3 + HDFS-8578 and HDFS-8791.

HDFS-8578 (parallel upgrade), which does not change the layout, should remain 
in 2.7.3.  Only HDFS-8791 (new layout) needs to be put into 2.8.0.
Tsz-Wo
 

On Monday, April 4, 2016 2:51 AM, Haohui Mai  wrote:
 
 

 +1 on option 3. I volunteer to help on releasing 2.8.0 as 2.7.3 +
HDFS-8578 and HDFS-8791.

~Haohui

On Fri, Apr 1, 2016 at 2:54 PM, Chris Trezzo  wrote:
> A few thoughts:
>
> 1. To echo Andrew Wang, HDFS-8578 (parallel upgrades) should be a
> prerequisite for HDFS-8791. Without that patch, upgrades can be very slow
> for data nodes depending on your setup.
>
> 2. We have already deployed this patch internally so, with my Twitter hat
> on, I would be perfectly happy as long as it makes it into trunk and 2.8.
> That being said, I would be hesitant to deploy the current 2.7.x or 2.6.x
> releases on a large production cluster that has a diverse set of block ids
> without this patch, especially if your data nodes have a large number of
> disks or you are using federation. To be clear though: this highly depends
> on your setup and at a minimum you should verify that this regression will
> not affect you. The current block-id based layout in 2.6.x and 2.7.2 has a
> performance regression that gets worse over time. When you see it happening
> on a live cluster, it is one of the harder issues to identify a root cause
> and debug. I do understand that this is currently only affecting a smaller
> number of users, but I also think this number has potential to increase as
> time goes on. Maybe we can issue a warning in the release notes for future
> 2.7.x and 2.6.x releases?
>
> 3. One option (this was suggested on HDFS-8791 and I think Sean alluded to
> this proposal on this thread) would be to cut a 2.8 release off of the
> 2.7.3 release with the new layout. What people currently think of as 2.8
> would then become 2.9. This would give customers a stable release that they
> could deploy with the new layout and would not break upgrade and downgrade
> expectations.
>
> On Fri, Apr 1, 2016 at 11:32 AM, Andrew Purtell  wrote:
>
>> As a downstream consumer of Apache Hadoop 2.7.x releases, I expect we would
>> patch the release to revert HDFS-8791 before pushing it out to production.
>> For what it's worth.
>>
>>
>> On Fri, Apr 1, 2016 at 11:23 AM, Andrew Wang 
>> wrote:
>>
>> > One other thing I wanted to bring up regarding HDFS-8791, we haven't
>> > backported the parallel DN upgrade improvement (HDFS-8578) to branch-2.6.
>> > HDFS-8578 is a very important related fix since otherwise upgrade will be
>> > very slow.
>> >
>> > On Thu, Mar 31, 2016 at 10:35 AM, Andrew Wang 
>> > wrote:
>> >
>> > > As I expressed on HDFS-8791, I do not want to include this JIRA in a
>> > > maintenance release. I've only seen it crop up on a handful of our
>> > > customers' clusters, and large users like Twitter and Yahoo that seem
>> to
>> > be
>> > > more affected are also the most able to patch this change in
>> themselves.
>> > >
>> > > Layout upgrades are quite disruptive, and I don't think it's worth
>> > > breaking upgrade and downgrade expectations when it doesn't affect the
>> > (in
>> > > my experience) vast majority of users.
>> > >
>> > > Vinod seemed to have a similar opinion in his comment on HDFS-8791, but
>> > > will let him elaborate.
>> > >
>> > > Best,
>> > > Andrew
>> > >
>> > > On Thu, Mar 31, 2016 at 9:11 AM, Sean Busbey 
>> > wrote:
>> > >
>> > >> As of 2 days ago, there were already 135 jiras associated with 2.7.3;
>> > >> if *any* of them end up introducing a regression, the inclusion of
>> > >> HDFS-8791 means that folks will have cluster downtime in order to back
>> > >> things out. If that happens to any substantial number of downstream
>> > >> folks, or any particularly vocal downstream folks, then it is very
>> > >> likely we'll lose the remaining trust of operators for rolling out
>> > >> maintenance releases. That's a pretty steep cost.
>> > >>
>> > >> Please do not include HDFS-8791 in any 2.6.z release. Folks having to
>> > >> be aware that an upgrade from e.g. 2.6.5 to 2.7.2 will fail is an
>> > >> unreasonable burden.
>> > >>
>> > >> I agree that this fix is important, I just think we should either cut
>> > >> a version of 2.8 that includes it or find a way to do it that gives an
>> > >> operational path for rolling downgrade.
>> > >>
>> > >> On Thu, Mar 31, 2016 at 10:10 AM, Junping Du 
>> > wrote:
>> > >> > Thanks for bringing up this topic, Sean.
>> > >> > When I released our latest Hadoop release, 2.6.4, the patch for
>> > >> > HDFS-8791 had not been committed yet, which is why we didn't
>> > >> > discuss this earlier.
>> > >> > I remember that in the JIRA discussion we treated this layout
>> > >> > change as a Blocker bug fixing a significant performance
>> > >> > regression, not as a normal performance improvement. And I believe
>> > >> > the HDFS community already did their best, carefully and patiently, to

[jira] [Created] (HDFS-10256) Use GenericTestUtils.getTestDir method in tests for temporary directories

2016-04-04 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-10256:


 Summary: Use GenericTestUtils.getTestDir method in tests for 
temporary directories
 Key: HDFS-10256
 URL: https://issues.apache.org/jira/browse/HDFS-10256
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build, test
Reporter: Vinayakumar B
Assignee: Vinayakumar B






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
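
To make the intent of HDFS-10256 concrete, here is a minimal sketch of the pattern the summary describes, assuming the GenericTestUtils.getTestDir helper in org.apache.hadoop.test (the class name and subdirectory below are illustrative):

    import java.io.File;
    import org.apache.hadoop.test.GenericTestUtils;

    public class TestDirExample {
      public static void main(String[] args) {
        // Instead of hard-coding a path such as new File("target/test/data"),
        // resolve the temporary directory through the shared test utility so
        // every test honors the same base-directory configuration.
        File testDir = GenericTestUtils.getTestDir("TestDirExample");
        testDir.mkdirs();
        System.out.println("Using temp dir: " + testDir.getAbsolutePath());
      }
    }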


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1059

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1059/

Changes:

[aajisaka] HADOOP-12967. Remove FileUtil#copyMerge. Contributed by Brahma Reddy

--
[...truncated 5710 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestEncryptionZones
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.985 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZones
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.408 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.01 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.511 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.079 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.809 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.293 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.67 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.003 sec - in 
org.apache.hadoop.hdfs.TestSnapshotCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDataTransferProtocol
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.352 sec - in 
org.apache.hadoop.hdfs.TestDataTransferProtocol
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.293 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.725 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.371 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.017 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.366 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 19, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 175.495 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Java HotSpot(TM) 64-Bit Server VM warn

Hadoop-Hdfs-trunk-Java8 - Build # 1059 - Failure

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1059/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5903 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:14 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  04:09 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.096 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:13 h
[INFO] Finished at: 2016-04-04T13:59:46+00:00
[INFO] Final Memory: 68M/460M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: 
org.apache.maven.surefire.booter.SurefireBooterForkException: Error occurred in 
starting fork, check output in log -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
The stream is closed

Stack Trace:
java.io.IOException: The stream is closed
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:877)
at org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:726)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:721)
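
For context, TestHFlush exercises the hflush call on an HDFS output stream, which pushes buffered bytes to every datanode in the write pipeline so new readers can see them. A minimal sketch of the API under test, assuming a reachable HDFS and an illustrative path (this is not the actual test body):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HFlushExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/tmp/hflush-demo"))) {
          out.writeBytes("partial record");
          // hflush makes these bytes visible to readers without closing the
          // stream. If the writer thread is interrupted mid-flush (as in
          // testHFlushInterrupted), later operations on the stream can fail
          // with "The stream is closed", as in the report above.
          out.hflush();
        }
      }
    }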




Hadoop-Hdfs-trunk - Build # 2989 - Still Failing

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2989/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5468 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:34 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  04:18 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.064 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:22 h
[INFO] Finished at: 2016-04-04T14:10:24+00:00
[INFO] Final Memory: 57M/539M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testRemoveVolumeBeingWritten

Error Message:
test timed out after 3 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 3 milliseconds
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:236)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testRemoveVolumeBeingWritten(TestFsDatasetImpl.java:629)
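
For context, the trace shows the test thread parked in CountDownLatch.await until the test timeout fired, the classic signature of waiting on an event that never arrives. A minimal sketch of that pattern with illustrative names (not the actual test body):

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    public class LatchTimeoutExample {
      public static void main(String[] args) throws InterruptedException {
        CountDownLatch volumeRemoved = new CountDownLatch(1); // illustrative
        // If no other thread ever calls volumeRemoved.countDown() -- say the
        // callback under test never fires -- a bare await() parks forever and
        // only the test-level timeout kills it, yielding a trace like the one
        // above. A bounded await surfaces the hang as a clean failure instead:
        boolean arrived = volumeRemoved.await(30, TimeUnit.SECONDS);
        System.out.println(arrived ? "event arrived" : "timed out waiting");
      }
    }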


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs[1]

Error Message:
logging edit without syncing should do not affect txid expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: logging edit without syncing should do not affect txid expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.As

Build failed in Jenkins: Hadoop-Hdfs-trunk #2989

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2989/

Changes:

[aajisaka] HADOOP-12967. Remove FileUtil#copyMerge. Contributed by Brahma Reddy

--
[...truncated 5275 lines...]
Running org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 150.68 sec - in 
org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.203 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.045 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.346 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.418 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.307 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.534 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.449 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.234 sec - in 
org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.353 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.198 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 83.058 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.464 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 113.415 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.691 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.538 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.677 sec - 
in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.163 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.994 sec - in 
org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.098 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.035 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.141 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.934 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.707 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.79 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests

Hadoop-Hdfs-trunk-Java8 - Build # 1060 - Still Failing

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1060/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 9384 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:08 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:56 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.130 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:00 h
[INFO] Finished at: 2016-04-04T18:03:32+00:00
[INFO] Final Memory: 70M/706M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt

Error Message:
test timed out after 5 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 5 milliseconds
at java.lang.Object.wait(Native Method)
at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:764)
at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:689)
at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:770)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:747)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:136)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.testPageRounder

Error Message:
Timed out waiting for condition. Thread diagnostics:
Timestamp: 2016-04-04 03:06:53,056

"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:142)
at java.l

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1060

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1060/

Changes:

[naganarasimha_gr] YARN-4746. yarn web services should convert parse failures 
of appId,

--
[...truncated 9191 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.059 sec - in 
org.apache.hadoop.cli.TestDeleteCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.715 sec - in 
org.apache.hadoop.cli.TestCacheAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.739 sec - in 
org.apache.hadoop.cli.TestAclCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.174 sec - in 
org.apache.hadoop.cli.TestHDFSCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.133 sec - in 
org.apache.hadoop.cli.TestCryptoAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.442 sec - in 
org.apache.hadoop.cli.TestXAttrCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.967 sec - in 
org.apache.hadoop.cli.TestErasureCodingCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.438 sec - in 
org.apache.hadoop.tools.TestJMXGet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.437 sec - in 
org.apache.hadoop.tools.TestTools
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.565 sec - in 
org.apache.hadoop.tools.TestHdfsConfigFields
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.317 sec - in 
org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.411 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.548 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.848 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.053 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.102 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.203 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Java HotSpot(TM) 64-Bit Server VM w

Hadoop-Hdfs-trunk - Build # 2990 - Still Failing

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2990/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5239 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:25 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  05:53 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.097 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 05:58 h
[INFO] Finished at: 2016-04-04T20:15:38+00:00
[INFO] Final Memory: 77M/588M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: 
java.lang.RuntimeException: java.io.IOException: Stream Closed -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestBlockStoragePolicy.testChangeColdRep

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.hadoop.hdfs.TestBlockStoragePolicy.checkLocatedBlocks(TestBlockStoragePolicy.java:1099)
at org.apache.hadoop.hdfs.TestBlockStoragePolicy.testChangeFileRep(TestBlockStoragePolicy.java:1128)
at org.apache.hadoop.hdfs.TestBlockStoragePolicy.testChangeColdRep(TestBlockStoragePolicy.java:1197)
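
For context, this test changes the replication factor of a file governed by the COLD storage policy and then checks the block locations reported by the namenode. A minimal sketch of the two public APIs involved, assuming a reachable HDFS and an illustrative path (COLD is a standard HDFS storage policy name):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class ChangeColdRepExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/cold/data.bin"); // illustrative path
        // Pin the file's replicas to ARCHIVE storage via the COLD policy...
        ((DistributedFileSystem) fs).setStoragePolicy(file, "COLD");
        // ...then raise replication. The failed assertion above
        // (expected:<3> but was:<2>) means one expected replica had not
        // been reported yet when the test checked the located blocks.
        fs.setReplication(file, (short) 3);
      }
    }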


FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1895)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
at org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback(TestRollingUpgrade.

Build failed in Jenkins: Hadoop-Hdfs-trunk #2990

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2990/

Changes:

[naganarasimha_gr] YARN-4746. yarn web services should convert parse failures 
of appId,

--
[...truncated 5046 lines...]
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.02 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.891 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Running org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.703 sec - 
in org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.978 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.883 sec - in 
org.apache.hadoop.hdfs.tools.TestGetConf
Running org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.501 sec - 
in org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
Running org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.857 sec - in 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.075 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.086 sec - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.38 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.752 sec - in 
org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.248 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.678 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Running org.apache.hadoop.hdfs.tools.TestGetGroups
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.359 sec - in 
org.apache.hadoop.hdfs.tools.TestGetGroups
Running org.apache.hadoop.hdfs.TestGetFileChecksum
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.863 sec - in 
org.apache.hadoop.hdfs.TestGetFileChecksum
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 85.73 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.164 sec - in 
org.apache.hadoop.hdfs.TestBlockMissingException
Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.78 sec - in 
org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.951 sec - in 
org.apache.hadoop.hdfs.TestPersistBlocks
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 15.379 sec - in 
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.127 sec - in 
org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.794 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.065 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 211.881 sec - 
in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Running 

Re: 2.7.3 release plan

2016-04-04 Thread Vinod Kumar Vavilapalli
I commented on the JIRA way back (see 
https://issues.apache.org/jira/browse/HDFS-8791?focusedCommentId=1503&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-1503),
 saying what I said below. Unfortunately, I haven’t followed the patch since 
my initial comment.

This isn’t about any specific release: starting with 2.6, we declared support 
for rolling upgrades and downgrades. Any patch that breaks this should not be 
in branch-2.

Two options from where I stand:
 (1) For folks who worked on the patch: is there a way to (a) make the 
upgrade/downgrade seamless for people who don’t care about this, and (b) have 
explicit documentation for people who do care to switch this behavior on and 
are willing to risk not having downgrades? If this means a new configuration 
property, so be it. It’s a necessary evil.
 (2) Just let specific users backport this into the specific 2.x branches they 
need, and leave it only on trunk.

Unless this behavior stops breaking rolling upgrades/downgrades, I think we 
should just revert it from branch-2, and definitely from 2.7.3 as it stands 
today.

+Vinod


> On Apr 1, 2016, at 2:54 PM, Chris Trezzo  wrote:
> 
> A few thoughts:
> 
> 1. To echo Andrew Wang, HDFS-8578 (parallel upgrades) should be a
> prerequisite for HDFS-8791. Without that patch, upgrades can be very slow
> for data nodes depending on your setup.
> 
> 2. We have already deployed this patch internally so, with my Twitter hat
> on, I would be perfectly happy as long as it makes it into trunk and 2.8.
> That being said, I would be hesitant to deploy the current 2.7.x or 2.6.x
> releases on a large production cluster that has a diverse set of block ids
> without this patch, especially if your data nodes have a large number of
> disks or you are using federation. To be clear though: this highly depends
> on your setup and at a minimum you should verify that this regression will
> not affect you. The current block-id based layout in 2.6.x and 2.7.2 has a
> performance regression that gets worse over time. When you see it happening
> on a live cluster, it is one of the harder issues to identify a root cause
> and debug. I do understand that this is currently only affecting a smaller
> number of users, but I also think this number has potential to increase as
> time goes on. Maybe we can issue a warning in the release notes for future
> 2.7.x and 2.6.x releases?
> 
> 3. One option (this was suggested on HDFS-8791 and I think Sean alluded to
> this proposal on this thread) would be to cut a 2.8 release off of the
> 2.7.3 release with the new layout. What people currently think of as 2.8
> would then become 2.9. This would give customers a stable release that they
> could deploy with the new layout and would not break upgrade and downgrade
> expectations.
> 
> On Fri, Apr 1, 2016 at 11:32 AM, Andrew Purtell  wrote:
> 
>> As a downstream consumer of Apache Hadoop 2.7.x releases, I expect we would
>> patch the release to revert HDFS-8791 before pushing it out to production.
>> For what it's worth.
>> 
>> 
>> On Fri, Apr 1, 2016 at 11:23 AM, Andrew Wang 
>> wrote:
>> 
>>> One other thing I wanted to bring up regarding HDFS-8791, we haven't
>>> backported the parallel DN upgrade improvement (HDFS-8578) to branch-2.6.
>>> HDFS-8578 is a very important related fix since otherwise upgrade will be
>>> very slow.
>>> 
>>> On Thu, Mar 31, 2016 at 10:35 AM, Andrew Wang 
>>> wrote:
>>> 
 As I expressed on HDFS-8791, I do not want to include this JIRA in a
 maintenance release. I've only seen it crop up on a handful of our
 customers' clusters, and large users like Twitter and Yahoo that seem
>> to
>>> be
 more affected are also the most able to patch this change in
>> themselves.
 
 Layout upgrades are quite disruptive, and I don't think it's worth
 breaking upgrade and downgrade expectations when it doesn't affect the
>>> (in
 my experience) vast majority of users.
 
 Vinod seemed to have a similar opinion in his comment on HDFS-8791, but
 will let him elaborate.
 
 Best,
 Andrew
 
 On Thu, Mar 31, 2016 at 9:11 AM, Sean Busbey 
>>> wrote:
 
> As of 2 days ago, there were already 135 jiras associated with 2.7.3;
> if *any* of them end up introducing a regression, the inclusion of
> HDFS-8791 means that folks will have cluster downtime in order to back
> things out. If that happens to any substantial number of downstream
> folks, or any particularly vocal downstream folks, then it is very
> likely we'll lose the remaining trust of operators for rolling out
> maintenance releases. That's a pretty steep cost.
> 
> Please do not include HDFS-8791 in any 2.6.z release. Folks having to
> be aware that an upgrade from e.g. 2.6.5 to 2.7.2 will fail is an
> unreasonable burden.
> 
> I agree that this fix is important, I just think we should either cut
> a version of 2.8 that includes it or find a w

Re: [Release thread] 2.8.0 release activities

2016-04-04 Thread Vinod Kumar Vavilapalli
Here we go again. The blocker/critical tickets have ballooned a lot; I see 64 
now: https://issues.apache.org/jira/issues/?filter=12334985

Also, the docs target (mvn package -Pdocs -DskipTests) is completely busted on 
branch-2. I figured I have to backport a whole bunch of patches that are only 
on trunk, and maybe more fixes on top of that. *sigh*

I’ll start pushing for progress toward an RC in a week or two.

Any help towards this, reviewing/committing outstanding patches and 
contributing to open items, is greatly appreciated.

Thanks
+Vinod

> On Feb 9, 2016, at 11:51 AM, Vinod Kumar Vavilapalli  
> wrote:
> 
> Sure. The last time I checked, there were 20-odd blocker/critical tickets too 
> that'll need some of my time.
> 
> Given that, if you can get them in within a week, we should be good.
> 
> +Vinod
> 
>> On Feb 5, 2016, at 1:19 PM, Subramaniam V K  wrote:
>> 
>> Vinod,
>> 
>> Thanks for initiating the 2.8 release thread. We are in late review stages
>> for YARN-4420 (Add REST API for listing reservations) and YARN-2575 (Adding
>> ACLs for reservation system), hoping to get them in by next week. Any chance
>> you can put off cutting 2.8 by a week as we are planning to deploy
>> ReservationSystem and these are critical for that?
>> 
>> Cheers,
>> Subru
>> 
>> On Thu, Feb 4, 2016 at 3:17 PM, Chris Nauroth 
>> wrote:
>> 
>>> FYI, I've just needed to raise HDFS-9761 to blocker status for the 2.8.0
>>> release.
>>> 
>>> --Chris Nauroth
>>> 
>>> 
>>> 
>>> 
>>> On 2/3/16, 6:19 PM, "Karthik Kambatla"  wrote:
>>> 
 Thanks Vinod. Not labeling 2.8.0 stable sounds perfectly reasonable to me.
 Let us not call it alpha or beta, though; it is quite confusing. :)
 
 On Wed, Feb 3, 2016 at 8:17 PM, Gangumalla, Uma >>> 
 wrote:
 
> Thanks Vinod. +1 for 2.8 release start.
> 
> Regards,
> Uma
> 
> On 2/3/16, 3:53 PM, "Vinod Kumar Vavilapalli" 
> wrote:
> 
>> Seems like all the features listed in the Roadmap wiki are in. I'm going
>> to try cutting an RC this weekend for a first/non-stable release off of
>> branch-2.8.
>> 
>> Let me know if anyone has any objections/concerns.
>> 
>> Thanks
>> +Vinod
>> 
>>> On Nov 25, 2015, at 5:59 PM, Vinod Kumar Vavilapalli
>>>  wrote:
>>> 
>>> Branch-2.8 is created.
>>> 
>>> As mentioned before, the goal on branch-2.8 is to put improvements /
>>> fixes to existing features with a goal of converging on an alpha release
>>> soon.
>>> 
>>> Thanks
>>> +Vinod
>>> 
>>> 
 On Nov 25, 2015, at 5:30 PM, Vinod Kumar Vavilapalli
  wrote:
 
 Forking threads now in order to track all things related to the
 release.
 
 Creating the branch now.
 
 Thanks
 +Vinod
 
 
> On Nov 25, 2015, at 11:37 AM, Vinod Kumar Vavilapalli
>  wrote:
> 
> I think we've converged at a high level w.r.t 2.8. And as I just sent
> out an email, I updated the Roadmap wiki reflecting the same:
> https://wiki.apache.org/hadoop/Roadmap
> 
> 
> I plan to create a 2.8 branch EOD today.
> 
> The goal for all of us should be to restrict improvements & fixes to
> only (a) the feature-set documented under 2.8 in the RoadMap wiki and
> (b) other minor features that are already in 2.8.
> 
> Thanks
> +Vinod
> 
> 
>> On Nov 11, 2015, at 12:13 PM, Vinod Kumar Vavilapalli
>> mailto:vino...@hortonworks.com>> wrote:
>> 
>> - Cut a branch about two weeks from now
>> - Do an RC mid next month (leaving ~4 weeks since branch-cut)
>> - As with the 2.7.x series, the first release will still be called an
>> early / alpha release in the interest of
>> -- gaining downstream adoption
>> -- wider testing,
>> -- yet reserving our right to fix any inadvertent incompatibilities
>> introduced.
> 
 
>>> 
>> 
> 
> 
>>> 
>>> 
> 



Hadoop-Hdfs-trunk-Java8 - Build # 1061 - Still Failing

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1061/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6259 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:03 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:15 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.119 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:19 h
[INFO] Finished at: 2016-04-04T22:25:51+00:00
[INFO] Final Memory: 56M/478M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
18 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
The stream is closed

Stack Trace:
java.io.IOException: The stream is closed
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at 
org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:877)
at 
org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:726)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:721)


FAILED:  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas

Error Message:

Expected: is 
 but: was 

Stack Trace:
java.lang.AssertionError: 
Expected: is 
 but: was 
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:865)
at org.junit.Assert.assertThat(Assert.java:832)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.ensureFileReplicasOnStorageType(LazyPersistT

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1061

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1061/

Changes:

[stevel] HADOOP-12169 ListStatus on empty dir in S3A lists itself instead of

[arp] HADOOP-11212. NetUtils.wrapException to handle SocketException

--
[...truncated 6066 lines...]

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestErasureCodingPolicies
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.548 sec - 
in org.apache.hadoop.hdfs.TestErasureCodingPolicies
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.486 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.093 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.865 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.736 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 135.412 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.513 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.806 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.154 sec - in 
org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.242 sec - in 
org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.268 sec - in 
org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.211 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.434 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.145 sec - in 
org.apache.hadoop.hdfs.protocol.TestAnnotations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.986 sec - in 
org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.554 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailur

Re: 2.7.3 release plan

2016-04-04 Thread Colin McCabe
I agree that HDFS-8578 should be a prerequisite for backporting
HDFS-8791.

I think we are overestimating the number of people affected by
HDFS-8791, and underestimating the disruption that would be caused by a
layout version upgrade in a dot release.  As Andrew, Sean, and others in
the thread pointed out, this could reduce people's trust in the
stabilization branches.

The maintenance branches were never intended to live forever. 
Eventually, people should start using newer releases.  A more efficient
DataNode layout is just one more motivation to upgrade.

best,
Colin


On Fri, Apr 1, 2016, at 14:54, Chris Trezzo wrote:
> A few thoughts:
> 
> 1. To echo Andrew Wang, HDFS-8578 (parallel upgrades) should be a
> prerequisite for HDFS-8791. Without that patch, upgrades can be very slow
> for data nodes depending on your setup.
> 
> 2. We have already deployed this patch internally so, with my Twitter hat
> on, I would be perfectly happy as long as it makes it into trunk and 2.8.
> That being said, I would be hesitant to deploy the current 2.7.x or 2.6.x
> releases on a large production cluster that has a diverse set of block ids
> without this patch, especially if your data nodes have a large number of
> disks or you are using federation. To be clear though: this highly depends
> on your setup and at a minimum you should verify that this regression will
> not affect you. The current block-id based layout in 2.6.x and 2.7.2 has a
> performance regression that gets worse over time. When you see it happening
> on a live cluster, it is one of the harder issues to identify a root cause
> and debug. I do understand that this is currently only affecting a smaller
> number of users, but I also think this number has potential to increase as
> time goes on. Maybe we can issue a warning in the release notes for future
> 2.7.x and 2.6.x releases?
> 
> 3. One option (this was suggested on HDFS-8791 and I think Sean alluded to
> this proposal on this thread) would be to cut a 2.8 release off of the
> 2.7.3 release with the new layout. What people currently think of as 2.8
> would then become 2.9. This would give customers a stable release that they
> could deploy with the new layout and would not break upgrade and downgrade
> expectations.
> 
> On Fri, Apr 1, 2016 at 11:32 AM, Andrew Purtell 
> wrote:
> 
> > As a downstream consumer of Apache Hadoop 2.7.x releases, I expect we would
> > patch the release to revert HDFS-8791 before pushing it out to production.
> > For what it's worth.
> >
> >
> > On Fri, Apr 1, 2016 at 11:23 AM, Andrew Wang 
> > wrote:
> >
> > > One other thing I wanted to bring up regarding HDFS-8791: we haven't
> > > backported the parallel DN upgrade improvement (HDFS-8578) to branch-2.6.
> > > HDFS-8578 is a very important related fix since otherwise upgrade will be
> > > very slow.
> > >
> > > On Thu, Mar 31, 2016 at 10:35 AM, Andrew Wang 
> > > wrote:
> > >
> > > > As I expressed on HDFS-8791, I do not want to include this JIRA in a
> > > > maintenance release. I've only seen it crop up on a handful of our
> > > > customers' clusters, and large users like Twitter and Yahoo that seem
> > > > to be more affected are also the most able to patch this change in
> > > > themselves.
> > > >
> > > > Layout upgrades are quite disruptive, and I don't think it's worth
> > > > breaking upgrade and downgrade expectations when it doesn't affect
> > > > the (in my experience) vast majority of users.
> > > >
> > > > Vinod seemed to have a similar opinion in his comment on HDFS-8791,
> > > > but I will let him elaborate.
> > > >
> > > > Best,
> > > > Andrew
> > > >
> > > > On Thu, Mar 31, 2016 at 9:11 AM, Sean Busbey 
> > > wrote:
> > > >
> > > >> As of 2 days ago, there were already 135 jiras associated with 2.7.3.
> > > >> If *any* of them end up introducing a regression, the inclusion of
> > > >> HDFS-8791 means that folks will have cluster downtime in order to back
> > > >> things out. If that happens to any substantial number of downstream
> > > >> folks, or any particularly vocal downstream folks, then it is very
> > > >> likely we'll lose the remaining trust of operators for rolling out
> > > >> maintenance releases. That's a pretty steep cost.
> > > >>
> > > >> Please do not include HDFS-8791 in any 2.6.z release. Folks having to
> > > >> be aware that an upgrade from e.g. 2.6.5 to 2.7.2 will fail is an
> > > >> unreasonable burden.
> > > >>
> > > >> I agree that this fix is important; I just think we should either cut
> > > >> a version of 2.8 that includes it or find a way to do it that gives an
> > > >> operational path for rolling downgrade.
> > > >>
> > > >> On Thu, Mar 31, 2016 at 10:10 AM, Junping Du 
> > > wrote:
> > > >> > Thanks for bringing up this topic, Sean.
> > > >> > When I released our latest Hadoop release 2.6.4, the patch of
> > > >> > HDFS-8791 hadn't been committed in yet, so that's why we didn't
> > > >> > discuss this earlier.
> > > >> > I remember in JIRA dis

Hadoop-Hdfs-trunk - Build # 2991 - Still Failing

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2991/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5385 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [05:05 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:21 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.068 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:26 h
[INFO] Finished at: 2016-04-05T00:45:33+00:00
[INFO] Final Memory: 56M/569M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
4 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
null

Stack Trace:
java.nio.channels.ClosedByInterruptException: null
at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
at 
org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:653)


FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpointWithMultipleNN

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1895)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
at 
org.apache.hadoop.hdfs.MiniD

Build failed in Jenkins: Hadoop-Hdfs-trunk #2991

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2991/

Changes:

[stevel] HADOOP-12169 ListStatus on empty dir in S3A lists itself instead of

[arp] HADOOP-11212. NetUtils.wrapException to handle SocketException

[iwasakims] HDFS-9599. TestDecommissioningStatus.testDecommissionStatus 
occasionally

--
[...truncated 5192 lines...]
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.424 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.159 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.084 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.093 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.552 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.323 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.666 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.033 sec - in 
org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.596 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.328 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.418 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.117 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 115.692 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 92.053 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.924 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.986 sec - 
in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.672 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.129 sec - in 
org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.436 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.644 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.916 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.705 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.295 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.497 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1062

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1062/

Changes:

[iwasakims] HDFS-9599. TestDecommissioningStatus.testDecommissionStatus 
occasionally

[kihwal] HDFS-10178. Permanent write failures can happen if pipeline recoveries

[gtcarrera9] YARN-4706. UI Hosting Configuration in TimelineServer doc is 
broken.

--
[...truncated 5780 lines...]
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.39 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 12.739 sec - 
in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.501 sec - in 
org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.076 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.422 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.648 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.363 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.439 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.203 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.246 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.733 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.641 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.299 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.608 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 16.464 sec - 
in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests 

Hadoop-Hdfs-trunk-Java8 - Build # 1062 - Still Failing

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1062/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5973 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:08 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:35 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.264 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:39 h
[INFO] Finished at: 2016-04-05T02:09:49+00:00
[INFO] Final Memory: 56M/654M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume(TestFsDatasetImpl.java:683)




[jira] [Created] (HDFS-10257) Quick Thread Local Storage set-up has a small flaw

2016-04-04 Thread Stephen Bovy (JIRA)
Stephen Bovy created HDFS-10257:
---

 Summary: Quick Thread Local Storage set-up has a small flaw
 Key: HDFS-10257
 URL: https://issues.apache.org/jira/browse/HDFS-10257
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 2.6.4
 Environment: Linux 
Reporter: Stephen Bovy
Priority: Minor


In jni_helper.c, in the getJNIEnv function, the
THREAD_LOCAL_STORAGE_SET_QUICK(env); macro is in the wrong location.

It should precede threadLocalStorageSet(env) as follows:

THREAD_LOCAL_STORAGE_SET_QUICK(env);

if (threadLocalStorageSet(env)) {
  return NULL;
}

And in thread_local_storage.h, the THREAD_LOCAL_STORAGE_SET_QUICK macro
should be as follows:

#ifdef HAVE_BETTER_TLS
  #define THREAD_LOCAL_STORAGE_GET_QUICK() \
    static __thread JNIEnv *quickTlsEnv = NULL; \
    { \
      if (quickTlsEnv) { \
        return quickTlsEnv; \
      } \
    }

  #define THREAD_LOCAL_STORAGE_SET_QUICK(env) \
    { \
      quickTlsEnv = (env); \
      return env; \
    }
#else
  #define THREAD_LOCAL_STORAGE_GET_QUICK()
  #define THREAD_LOCAL_STORAGE_SET_QUICK(env)
#endif
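
For illustration, a minimal sketch (not the actual patch) of how getJNIEnv()
in jni_helper.c would flow with the proposed ordering, using the macros
above; attachJvmToThread() is a hypothetical stand-in for the real
attach/create logic, and error handling is elided:

#include <jni.h>
#include "thread_local_storage.h"  /* quick-TLS macros + threadLocalStorageSet() */

/* Hypothetical stand-in for the real attach/create logic in jni_helper.c. */
static JNIEnv *attachJvmToThread(void)
{
    return NULL;  /* elided in this sketch */
}

JNIEnv *getJNIEnv(void)
{
    JNIEnv *env;

    /* Fast path: on HAVE_BETTER_TLS builds this declares the __thread cache
     * and returns it immediately when it is already populated. */
    THREAD_LOCAL_STORAGE_GET_QUICK();

    env = attachJvmToThread();
    if (!env) {
        return NULL;
    }

    /* Proposed fix: populate the quick cache (and, on HAVE_BETTER_TLS
     * builds, return) before calling threadLocalStorageSet(), so a failure
     * there can no longer leave the fast path unset. */
    THREAD_LOCAL_STORAGE_SET_QUICK(env);

    /* Builds without HAVE_BETTER_TLS fall through to the pthread-key path. */
    if (threadLocalStorageSet(env)) {
        return NULL;
    }
    return env;
}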






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2992 - Still Failing

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2992/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5365 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [05:13 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:35 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.157 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:40 h
[INFO] Finished at: 2016-04-05T05:32:30+00:00
[INFO] Final Memory: 57M/578M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume(TestFsDatasetImpl.java:683)


FAILED:  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testConcurrentRead

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at org.junit.Assert.assertFalse(Assert.java:74)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testConcurrentRead(TestLazyPersistFiles.java:223)


FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs[1]

Error Message:
logging edit without syncing should do not affect txid expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: logging edit without syncing should do not affect 
txid expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)

Build failed in Jenkins: Hadoop-Hdfs-trunk #2992

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2992/

Changes:

[kihwal] HDFS-10178. Permanent write failures can happen if pipeline recoveries

[gtcarrera9] YARN-4706. UI Hosting Configuration in TimelineServer doc is 
broken.

[cmccabe] HADOOP-12959. Add additional github web site for ISA-L library (Li Bo

--
[...truncated 5172 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.596 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 138.486 sec - 
in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.75 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.174 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.193 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.781 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.262 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.36 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.514 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.282 sec - in 
org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.263 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.167 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.316 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.709 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.33 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.44 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.544 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.573 sec - 
in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.242 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.719 sec - in 
org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.375 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.582 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.078 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.508 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.223 sec - in 
org.apac

[jira] [Created] (HDFS-10258) Erasure Coding: support small cluster whose #DataNode < # (Blocks in a BlockGroup)

2016-04-04 Thread Li Bo (JIRA)
Li Bo created HDFS-10258:


 Summary: Erasure Coding: support small cluster whose #DataNode < # 
(Blocks in a BlockGroup)
 Key: HDFS-10258
 URL: https://issues.apache.org/jira/browse/HDFS-10258
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk-Java8 - Build # 1063 - Still Failing

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1063/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6700 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:07 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:15 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.073 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:19 h
[INFO] Finished at: 2016-04-05T06:32:52+00:00
[INFO] Final Memory: 56M/444M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
4 tests failed.
FAILED:  
org.apache.hadoop.fs.TestSymlinkHdfsFileContext.org.apache.hadoop.fs.TestSymlinkHdfsFileContext

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: 
org/apache/hadoop/security/authentication/client/ConnectionConfigurator
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initialize(WebHdfsFileSystem.java:189)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2800)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2837)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2819)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:381)
at 
org.apache.hadoop.hdfs.web.WebHdfsTestUtil.getWebHdfsFileSystem(WebHdfsTestUtil.java:61)
at 
org.apache.hadoop.fs.TestSymlinkHdfs.beforeClassSetup(TestSymlinkHdfs.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.junit.runners.model.Framework

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1063

2016-04-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1063/

Changes:

[cmccabe] HADOOP-12959. Add additional github web site for ISA-L library (Li Bo

[cmccabe] HDFS-8496. Calling stopWriter() with FSDatasetImpl lock held may block

[vinayakumarb] HDFS-9917. IBR accumulate more objects when SNN was down for 
sometime.

--
[...truncated 6507 lines...]
java.io.IOException: The stream is closed
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at 
org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:877)
at 
org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:726)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:721)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestErasureCodingPolicies
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.442 sec - 
in org.apache.hadoop.hdfs.TestErasureCodingPolicies
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.472 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.856 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.657 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.217 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.291 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.415 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.747 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.338 sec - in 
org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.253 sec - in 
org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.313 sec - in 
org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.235 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.4 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.h