Re: [VOTE] Merge HDFS-7285 (erasure coding) branch to trunk

2015-09-30 Thread Andrew Wang
Branch has been merged to trunk, thanks again to everyone who worked on the
feature!

On Tue, Sep 29, 2015 at 10:44 PM, Zhe Zhang  wrote:

> Thanks to everyone who has participated in this discussion.
>
> With 7 +1's (5 binding and 2 non-binding), and no -1, this vote has passed.
> I will do a final 'git merge' with trunk and work with Andrew to merge the
> branch to trunk. I'll update on this thread when the merge is done.
>
> ---
> Zhe Zhang
>
> On Thu, Sep 24, 2015 at 11:08 PM, Liu, Yi A  wrote:
>
> > (Change it to binding.)
> >
> > +1
> > I have been involved in the development and code review on the feature
> > branch. It's a great feature and I think it's ready to merge into trunk.
> >
> > Thanks all for the contribution.
> >
> > Regards,
> > Yi Liu
> >
> >
> > -Original Message-
> > From: Liu, Yi A
> > Sent: Friday, September 25, 2015 1:51 PM
> > To: hdfs-dev@hadoop.apache.org
> > Subject: RE: [VOTE] Merge HDFS-7285 (erasure coding) branch to trunk
> >
> > +1 (non-binding)
> > I have been involved in the development and code review on the feature
> > branch. It's a great feature and I think it's ready to merge into trunk.
> >
> > Thanks all for the contribution.
> >
> > Regards,
> > Yi Liu
> >
> >
> > -Original Message-
> > From: Vinayakumar B [mailto:vinayakum...@apache.org]
> > Sent: Friday, September 25, 2015 12:21 PM
> > To: hdfs-dev@hadoop.apache.org
> > Subject: Re: [VOTE] Merge HDFS-7285 (erasure coding) branch to trunk
> >
> > +1,
> >
> > I've been involved starting from the design and development of
> > ErasureCoding. I think phase 1 of this development is ready to be merged
> > to trunk. It has come a long way to its current state through the
> > significant effort of many contributors and reviewers, on both design
> > and code.
> >
> > Thanks Everyone for the efforts.
> >
> > Regards,
> > Vinay
> >
> > On Wed, Sep 23, 2015 at 10:53 PM, Jing Zhao  wrote:
> >
> > > +1
> > >
> > > I've been involved in both development and review on the branch, and I
> > > believe it's now ready to get merged into trunk. Many thanks to all
> > > the contributors and reviewers!
> > >
> > > Thanks,
> > > -Jing
> > >
> > > On Tue, Sep 22, 2015 at 6:17 PM, Zheng, Kai  wrote:
> > >
> > > > Non-binding +1
> > > >
> > > > According to our extensive performance tests, striping + ISA-L coder
> > > > based erasure coding not only saves storage, but can also increase the
> > > > throughput of a client or a cluster. It will be a great addition to
> > > > HDFS and its users. Based on the latest branch code, we also observed
> > > > that it is very reliable in concurrent tests. We'll provide the perf
> > > > test report after it's sorted out and hope it helps.
> > > > Thanks!
> > > >
> > > > Regards,
> > > > Kai
> > > >
> > > > -Original Message-
> > > > From: Gangumalla, Uma [mailto:uma.ganguma...@intel.com]
> > > > Sent: Wednesday, September 23, 2015 8:50 AM
> > > > To: hdfs-dev@hadoop.apache.org; common-...@hadoop.apache.org
> > > > Subject: Re: [VOTE] Merge HDFS-7285 (erasure coding) branch to trunk
> > > >
> > > > +1
> > > >
> > > > Great addition to HDFS. Thanks all contributors for the nice work.
> > > >
> > > > Regards,
> > > > Uma
> > > >
> > > > On 9/22/15, 3:40 PM, "Zhe Zhang"  wrote:
> > > >
> > > > >Hi,
> > > > >
> > > > >I'd like to propose a vote to merge the HDFS-7285 feature branch
> > > > >back to trunk. Since November 2014 we have been designing and
> > > > >developing this feature under the umbrella JIRAs HDFS-7285 and
> > > > >HADOOP-11264, and have committed approximately 210 patches.
> > > > >
> > > > >The HDFS-7285 feature branch was created to support the first phase
> > > > >of HDFS erasure coding (HDFS-EC). The objective of HDFS-EC is to
> > > > >significantly reduce storage space usage in HDFS clusters. Instead
> > > > >of always creating 3 replicas of each block with 200% storage space
> > > > >overhead, HDFS-EC provides data durability through parity data
> > > > >blocks. With most EC configurations, the storage overhead is no more
> > > > >than 50%; for example, a 6+3 Reed-Solomon schema stores 3 parity
> > > > >blocks for every 6 data blocks, a 50% overhead.
> > > > >Based on profiling results of production clusters, we decided to
> > > > >support EC with the striped block layout in the first phase, so
> > > > >that small files can be better handled. This means dividing each
> > > > >logical HDFS file block into smaller units (striping cells) and
> > > > >spreading them on a set of DataNodes in round-robin fashion. Parity
> > > > >cells are generated for each stripe of original data cells. We have
> > > > >made changes to NameNode, client, and DataNode to generalize the
> > > > >block concept and handle the mapping between a logical file block
> > > > >and its internal storage blocks. For further details please see the
> > > > >design doc on HDFS-7285.
> > > > >HADOOP-11264 focuses on providing flexible and high-performance
> > > > >codec calculation support.
> > > > >
> > > > >The nightly Jenkins job of the branch has reported several
> > > > >successful runs

[jira] [Resolved] (HDFS-7285) Erasure Coding Support inside HDFS

2015-09-30 Thread Zhe Zhang (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhe Zhang resolved HDFS-7285.
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0

I just did a final {{git merge}} to sync with trunk and [~andrew.wang] helped 
push the HDFS-7285 branch to trunk. Resolving this JIRA now; let's keep working 
on follow-on tasks under HDFS-8031.

Thanks very much to all contributors to EC phase I, and for the helpful 
discussions in the community.

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Fix For: 3.0.0
>
> Attachments: Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> HDFS-7285-Consolidated-20150911.patch, HDFS-7285-initial-PoC.patch, 
> HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, 
> HDFS-EC-Merge-PoC-20150624.patch, HDFS-EC-merge-consolidated-01.patch, 
> HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, 
> HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of 4 
> blocks, with a storage overhead of only 40%. This makes EC quite an 
> attractive alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed as of Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and 
> depends on MapReduce to do encoding and decoding tasks; 2) it can only be 
> used for cold files that will not be appended anymore; 3) its pure Java EC 
> coding implementation is extremely slow in practical use. Due to these, it 
> might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design layers the EC feature on top of the 
> storage type support and aims to be compatible with existing HDFS features 
> like caching, snapshots, encryption, and high availability. This design 
> will also support different EC coding schemes, implementations, and 
> policies for different deployment scenarios. By utilizing advanced 
> libraries (e.g. the Intel ISA-L library), an implementation can greatly 
> improve the performance of EC encoding/decoding and make the EC solution 
> even more attractive. We will post the design document soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2379 - Failure

2015-09-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2379/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5956 lines...]
Updating HDFS-8228
Updating HDFS-8033
Updating HADOOP-11514
Updating HDFS-9040
Updating HDFS-8744
Updating HDFS-8223
Updating HDFS-8220
Updating HDFS-7949
Updating HDFS-8550
Updating HADOOP-11818
Updating HDFS-8602
Updating HDFS-8366
Updating HDFS-8363
Updating HDFS-8364
Updating HADOOP-12065
Updating HADOOP-12060
Updating HDFS-7839
Updating HADOOP-11707
Updating HDFS-8167
Updating HADOOP-11706
Updating HDFS-8166
Updating HDFS-7837
Updating HADOOP-11705
Updating HADOOP-11782
Updating HDFS-8024
Updating HDFS-8023
Updating HDFS-8543
Updating HDFS-8216
Updating HDFS-8408
Updating HDFS-7937
Updating HDFS-7936
Updating HDFS-8212
Updating HDFS-8005
Updating HDFS-8619
Updating HDFS-8563
Updating HDFS-8975
Updating HDFS-8352
Updating HDFS-8978
Updating HDFS-8355
Updating HDFS-8183
Updating HDFS-8418
Updating HDFS-8186
Updating HDFS-8417
Updating HDFS-7369
Updating HDFS-8188
Updating HDFS-8189
Updating HDFS-8156
Updating HDFS-7749
Updating HDFS-8557
Updating HDFS-8882
Updating HDFS-8556
Updating HDFS-8760
Updating HDFS-8559
Updating HDFS-8203
Updating HDFS-8202
Updating HDFS-8181
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #2379

2015-09-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2379/

Changes:

[zhezhang] HDFS-7347. Configurable erasure coding policy for individual files 
and

[zhezhang] HDFS-7339. Allocating and persisting block groups in NameNode.

[zhezhang] HDFS-7652. Process block reports for erasure coded blocks. 
Contributed

[zhezhang] Fix Compilation Error in TestAddBlockgroup.java after the merge

[zhezhang] HADOOP-11514. Raw Erasure Coder API for concrete encoding and 
decoding

[zhezhang] HADOOP-11534. Minor improvements for raw erasure coders ( 
Contributed by

[zhezhang] HADOOP-11541. Raw XOR coder

[zhezhang] HDFS-7716. Erasure Coding: extend BlockInfo to handle EC info.

[zhezhang] HADOOP-11542. Raw Reed-Solomon coder in pure Java. Contributed by Kai

[zhezhang] HDFS-7749. Erasure Coding: Add striped block support in INodeFile.

[zhezhang] Addendum fix for HDFS-7749 to be compatible with HDFS-7993

[zhezhang] HDFS-7837. Erasure Coding: allocate and persist striped blocks in

[zhezhang] HADOOP-11643. Define EC schema API for ErasureCodec. Contributed by 
Kai

[zhezhang] HDFS-7872. Erasure Coding: INodeFile.dumpTreeRecursively() supports 
to

[zhezhang] HADOOP-11646. Erasure Coder API for encoding and decoding of block 
group

[zhezhang] HDFS-7853. Erasure coding: extend LocatedBlocks to support reading 
from

[zhezhang] HADOOP-11705. Make erasure coder configurable. Contributed by Kai 
Zheng

[zhezhang] Fixed a compiling issue introduced by HADOOP-11705.

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts when merging with

[zhezhang] HDFS-7826. Erasure Coding: Update INodeFile quota computation for

[zhezhang] HDFS-7912. Erasure Coding: track BlockInfo instead of Block in

[zhezhang] HADOOP-11706 Refine a little bit erasure coder API

[zhezhang] Updated CHANGES-HDFS-EC-7285.txt accordingly

[zhezhang] HDFS-7369. Erasure coding: distribute recovery work for striped 
blocks

[zhezhang] HADOOP-11707. Add factory to create raw erasure coder.  Contributed 
by

[zhezhang] HADOOP-11647. Reed-Solomon ErasureCoder. Contributed by Kai Zheng

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts when merging with

[zhezhang] HDFS-7864. Erasure Coding: Update safemode calculation for striped

[zhezhang] HDFS-7827. Erasure Coding: support striped blocks in non-protobuf

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts when merging with

[zhezhang] HDFS-7716. Add a test for BlockGroup support in FSImage.  
Contributed by

[zhezhang] HADOOP-11664. Loading predefined EC schemas from configuration.

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts in the branch when

[zhezhang] HDFS-7907. Erasure Coding: track invalid, corrupt, and under-recovery

[zhezhang] HDFS-8005. Erasure Coding: simplify striped block recovery work

[zhezhang] HDFS-8027. Erasure Coding: Update CHANGES-HDFS-7285.txt with branch

[zhezhang] HDFS-7617. Add unit tests for editlog transactions for EC. 
Contributed

[zhezhang] HADOOP-11782 Correct two thrown messages in ECSchema class. 
Contributed

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts in the branch when

[zhezhang] HDFS-7839. Erasure coding: implement facilities in NameNode to create

[zhezhang] HADOOP-11740. Combine erasure encoder and decoder interfaces.

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts in the branch when

[zhezhang] HDFS-7969. Erasure coding: NameNode support for lease recovery of

[zhezhang] HADOOP-11805 Better to rename some raw erasure coders. Contributed by

[zhezhang] Updated CHANGES-HDFS-EC-7285.txt

[zhezhang] HADOOP-11782 Correct two thrown messages in ECSchema class. 
Contributed

[zhezhang] HADOOP-11740. Combine erasure encoder and decoder interfaces.

[zhezhang] HADOOP-11645. Erasure Codec API covering the essential aspects for an

[zhezhang] HDFS-7782. Erasure coding: pread from files in striped layout.

[zhezhang] HDFS-7782. Erasure coding: pread from files in striped layout.

[zhezhang] HDFS-8023. Erasure Coding: retrieve eraure coding schema for a file 
from

[zhezhang] HDFS-8023. Erasure Coding: retrieve eraure coding schema for a file 
from

[zhezhang] HDFS-8074 Define a system-wide default EC schema. Contributed by Kai

[zhezhang] HDFS-8104 Make hard-coded values consistent with the system default

[zhezhang] HADOOP-11818 Minor improvements for erasurecode classes. Contributed 
by

[zhezhang] HDFS-8077. Erasure coding: fix bugs in EC zone and symlinks. 
Contributed

[zhezhang] HDFS-7889 Subclass DFSOutputStream to support writing striping layout

[zhezhang] HDFS-8090. Erasure Coding: Add RPC to client-namenode to list all

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts in the branch when

[zhezhang] HDFS-8122. Erasure Coding: Support specifying ECSchema during 
creation

[zhezhang] HDFS-8114. Erasure coding: Add auditlog

[zhezhang] HDFS-8123. Erasure Coding: Better to move EC related proto messages 
to a

[zhezhang] HDFS-8027. Erasure Coding: Update CHANGES-HDFS-7285.txt with branch

[zhezhang] H

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #439

2015-09-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/439/

Changes:

[zhezhang] HDFS-7347. Configurable erasure coding policy for individual files 
and

[zhezhang] HDFS-7339. Allocating and persisting block groups in NameNode.

[zhezhang] HDFS-7652. Process block reports for erasure coded blocks. 
Contributed

[zhezhang] Fix Compilation Error in TestAddBlockgroup.java after the merge

[zhezhang] HADOOP-11514. Raw Erasure Coder API for concrete encoding and 
decoding

[zhezhang] HADOOP-11534. Minor improvements for raw erasure coders ( 
Contributed by

[zhezhang] HADOOP-11541. Raw XOR coder

[zhezhang] HDFS-7716. Erasure Coding: extend BlockInfo to handle EC info.

[zhezhang] HADOOP-11542. Raw Reed-Solomon coder in pure Java. Contributed by Kai

[zhezhang] HDFS-7749. Erasure Coding: Add striped block support in INodeFile.

[zhezhang] Addendum fix for HDFS-7749 to be compatible with HDFS-7993

[zhezhang] HDFS-7837. Erasure Coding: allocate and persist striped blocks in

[zhezhang] HADOOP-11643. Define EC schema API for ErasureCodec. Contributed by 
Kai

[zhezhang] HDFS-7872. Erasure Coding: INodeFile.dumpTreeRecursively() supports 
to

[zhezhang] HADOOP-11646. Erasure Coder API for encoding and decoding of block 
group

[zhezhang] HDFS-7853. Erasure coding: extend LocatedBlocks to support reading 
from

[zhezhang] HADOOP-11705. Make erasure coder configurable. Contributed by Kai 
Zheng

[zhezhang] Fixed a compiling issue introduced by HADOOP-11705.

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts when merging with

[zhezhang] HDFS-7826. Erasure Coding: Update INodeFile quota computation for

[zhezhang] HDFS-7912. Erasure Coding: track BlockInfo instead of Block in

[zhezhang] HADOOP-11706 Refine a little bit erasure coder API

[zhezhang] Updated CHANGES-HDFS-EC-7285.txt accordingly

[zhezhang] HDFS-7369. Erasure coding: distribute recovery work for striped 
blocks

[zhezhang] HADOOP-11707. Add factory to create raw erasure coder.  Contributed 
by

[zhezhang] HADOOP-11647. Reed-Solomon ErasureCoder. Contributed by Kai Zheng

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts when merging with

[zhezhang] HDFS-7864. Erasure Coding: Update safemode calculation for striped

[zhezhang] HDFS-7827. Erasure Coding: support striped blocks in non-protobuf

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts when merging with

[zhezhang] HDFS-7716. Add a test for BlockGroup support in FSImage.  
Contributed by

[zhezhang] HADOOP-11664. Loading predefined EC schemas from configuration.

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts in the branch when

[zhezhang] HDFS-7907. Erasure Coding: track invalid, corrupt, and under-recovery

[zhezhang] HDFS-8005. Erasure Coding: simplify striped block recovery work

[zhezhang] HDFS-8027. Erasure Coding: Update CHANGES-HDFS-7285.txt with branch

[zhezhang] HDFS-7617. Add unit tests for editlog transactions for EC. 
Contributed

[zhezhang] HADOOP-11782 Correct two thrown messages in ECSchema class. 
Contributed

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts in the branch when

[zhezhang] HDFS-7839. Erasure coding: implement facilities in NameNode to create

[zhezhang] HADOOP-11740. Combine erasure encoder and decoder interfaces.

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts in the branch when

[zhezhang] HDFS-7969. Erasure coding: NameNode support for lease recovery of

[zhezhang] HADOOP-11805 Better to rename some raw erasure coders. Contributed by

[zhezhang] Updated CHANGES-HDFS-EC-7285.txt

[zhezhang] HADOOP-11782 Correct two thrown messages in ECSchema class. 
Contributed

[zhezhang] HADOOP-11740. Combine erasure encoder and decoder interfaces.

[zhezhang] HADOOP-11645. Erasure Codec API covering the essential aspects for an

[zhezhang] HDFS-7782. Erasure coding: pread from files in striped layout.

[zhezhang] HDFS-7782. Erasure coding: pread from files in striped layout.

[zhezhang] HDFS-8023. Erasure Coding: retrieve eraure coding schema for a file 
from

[zhezhang] HDFS-8023. Erasure Coding: retrieve eraure coding schema for a file 
from

[zhezhang] HDFS-8074 Define a system-wide default EC schema. Contributed by Kai

[zhezhang] HDFS-8104 Make hard-coded values consistent with the system default

[zhezhang] HADOOP-11818 Minor improvements for erasurecode classes. Contributed 
by

[zhezhang] HDFS-8077. Erasure coding: fix bugs in EC zone and symlinks. 
Contributed

[zhezhang] HDFS-7889 Subclass DFSOutputStream to support writing striping layout

[zhezhang] HDFS-8090. Erasure Coding: Add RPC to client-namenode to list all

[zhezhang] HDFS-7936. Erasure coding: resolving conflicts in the branch when

[zhezhang] HDFS-8122. Erasure Coding: Support specifying ECSchema during 
creation

[zhezhang] HDFS-8114. Erasure coding: Add auditlog

[zhezhang] HDFS-8123. Erasure Coding: Better to move EC related proto messages 
to a

[zhezhang] HDFS-8027. Erasure Coding: Update CHANGES-HDFS-7285.txt with branch

[zhezha

Hadoop-Hdfs-trunk-Java8 - Build # 439 - Still Failing

2015-09-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/439/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7483 lines...]
Updating HDFS-8228
Updating HDFS-8033
Updating HADOOP-11514
Updating HDFS-9040
Updating HDFS-8744
Updating HDFS-8223
Updating HDFS-8220
Updating HDFS-7949
Updating HDFS-8550
Updating HADOOP-11818
Updating HDFS-8602
Updating HDFS-8366
Updating HDFS-8363
Updating HDFS-8364
Updating HADOOP-12065
Updating HADOOP-12060
Updating HDFS-7839
Updating HADOOP-11707
Updating HDFS-8167
Updating HADOOP-11706
Updating HDFS-8166
Updating HDFS-7837
Updating HADOOP-11705
Updating HADOOP-11782
Updating HDFS-8024
Updating HDFS-8023
Updating HDFS-8543
Updating HDFS-8216
Updating HDFS-8408
Updating HDFS-7937
Updating HDFS-7936
Updating HDFS-8212
Updating HDFS-8005
Updating HDFS-8619
Updating HDFS-8563
Updating HDFS-8975
Updating HDFS-8352
Updating HDFS-8978
Updating HDFS-8355
Updating HDFS-8183
Updating HDFS-8418
Updating HDFS-8186
Updating HDFS-8417
Updating HDFS-7369
Updating HDFS-8188
Updating HDFS-8189
Updating HDFS-8156
Updating HDFS-7749
Updating HDFS-8557
Updating HDFS-8882
Updating HDFS-8556
Updating HDFS-8760
Updating HDFS-8559
Updating HDFS-8203
Updating HDFS-8202
Updating HDFS-8181
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
14 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverAnyBlocks1

Error Message:
Time out waiting for EC block recovery.

Stack Trace:
java.io.IOException: Time out waiting for EC block recovery.
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:383)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:283)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverAnyBlocks1(TestRecoverStripedFile.java:168)


FAILED:  org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneDataBlock

Error Message:
Time out waiting for EC block recovery.

Stack Trace:
java.io.IOException: Time out waiting for EC block recovery.
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:383)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:283)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneDataBlock(TestRecoverStripedFile.java:144)


FAILED:  
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverThreeParityBlocks

Error Message:
Time out waiting for EC block recovery.

Stack Trace:
java.io.IOException: Time out waiting for EC block recovery.
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:383)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:283)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverThreeParityBlocks(TestRecoverStripedFile.java:126)


FAILED:  org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneParityBlock

Error Message:
Time out waiting for EC block recovery.

Stack Trace:
java.io.IOException: Time out waiting for EC block recovery.
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:383)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:283)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneParityBlock(TestRecoverStripedFile.java:102)


FAILED:  
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverThreeDataBlocks1

Error Message:
Time out waiting for EC block recovery.

Stack Trace:
java.io.IOException: Time out waiting for EC block recovery.
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:383)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:283)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverThreeDataBlocks1(TestRecoverStripedFile.java:138)


FAILED:  org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneDataBlock1

Error Message:
Time out waiting for EC block recovery.

Stack Trace:
java.io.IOException: Time out waiting for EC block recovery.
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:383)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:283)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneDataBlock

[jira] [Created] (HDFS-9178) Slow datanode I/O can cause a wrong node to be marked bad

2015-09-30 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-9178:


 Summary: Slow datanode I/O can cause a wrong node to be marked bad
 Key: HDFS-9178
 URL: https://issues.apache.org/jira/browse/HDFS-9178
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Priority: Critical


When a non-leaf datanode in a pipeline is slow on or stuck at disk I/O, the 
downstream node can time out reading packets, since even the heartbeat packets 
will not be relayed down.

The packet read timeout is set in {{DataXceiver#run()}}:

{code}
  peer.setReadTimeout(dnConf.socketTimeout);
{code}

When the downstream node times out and closes the connection to the upstream 
node, the upstream node's {{PacketResponder}} gets an {{EOFException}} and 
sends an ack upstream with the downstream node's status set to {{ERROR}}. This 
causes the client to exclude the downstream node, even though the upstream 
node was the one that got stuck.

The connection to the downstream node has a longer timeout, so the downstream 
node will always time out first. The downstream timeout is set in 
{{writeBlock()}}:
{code}
  int timeoutValue = dnConf.socketTimeout +
  (HdfsConstants.READ_TIMEOUT_EXTENSION * targets.length);
  int writeTimeout = dnConf.socketWriteTimeout +
  (HdfsConstants.WRITE_TIMEOUT_EXTENSION * targets.length);
  NetUtils.connect(mirrorSock, mirrorTarget, timeoutValue);
  OutputStream unbufMirrorOut = NetUtils.getOutputStream(mirrorSock,
  writeTimeout);
{code}
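
For concreteness, a back-of-the-envelope comparison of the two timeouts. The 
numbers below assume the stock {{HdfsConstants}} defaults (60s read timeout, 
5s extension), not the reporter's actual cluster settings:

{code}
// Hypothetical 3-node pipeline client -> DN1 -> DN2 -> DN3, with DN2 stuck
// on disk I/O so that nothing (not even heartbeat packets) reaches DN3.
int socketTimeout = 60_000;          // dnConf.socketTimeout, assumed default
int readTimeoutExtension = 5_000;    // HdfsConstants.READ_TIMEOUT_EXTENSION

// DN3 reads from DN2 with the plain timeout set in DataXceiver#run():
int dn3ReadTimeout = socketTimeout;  // 60s

// DN2's connection toward DN3 uses the extended value from writeBlock()
// (targets.length == 1 at DN2, since only DN3 remains downstream):
int dn2DownstreamTimeout = socketTimeout + readTimeoutExtension * 1;  // 65s

// 60s < 65s: DN3's read timeout fires first, DN3 closes the connection, and
// DN2's PacketResponder gets the EOFException and reports DN3 as ERROR, even
// though DN2 itself caused the stall.
{code}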



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9179) fs.defaultFS should not be used on the server side

2015-09-30 Thread Daniel Templeton (JIRA)
Daniel Templeton created HDFS-9179:
--

 Summary: fs.defaultFS should not be used on the server side
 Key: HDFS-9179
 URL: https://issues.apache.org/jira/browse/HDFS-9179
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.1
Reporter: Daniel Templeton
Assignee: Daniel Templeton


Currently the namenode will bind to the address given by fs.defaultFS if no 
rpc-address is given. That behavior is an evolutionary artifact and should be 
removed. Instead, rpc-address should be a required setting in the server-side 
configuration.
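
A rough sketch of the fallback being described, for illustration only (this is 
not the exact NameNode code; the lookup is simplified):

{code}
// Simplified illustration: the NN derives its RPC bind address from
// fs.defaultFS only when dfs.namenode.rpc-address is absent.
Configuration conf = new HdfsConfiguration();
String rpcAddr = conf.getTrimmed(DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
InetSocketAddress bindAddr = (rpcAddr != null)
    ? NetUtils.createSocketAddr(rpcAddr)
    // the evolutionary artifact: falling back to the client-side default FS
    : NetUtils.createSocketAddr(
        conf.get(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY));
{code}

Making rpc-address required would turn the fallback branch into a 
configuration error instead.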



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9180) Update excluded DataNodes in DFSStripedOutputStream based on failures in data streamers

2015-09-30 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-9180:
---

 Summary: Update excluded DataNodes in DFSStripedOutputStream based 
on failures in data streamers
 Key: HDFS-9180
 URL: https://issues.apache.org/jira/browse/HDFS-9180
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao


This is a TODO in HDFS-9040: based on the failures that all the striped data 
streamers hit, the DFSStripedOutputStream should keep a record of all the 
DataNodes that should be excluded.

This jira will also fix several bugs in the DFSStripedOutputStream. More 
details will be provided in the comments.
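
A minimal sketch of the bookkeeping this asks for (the streamer accessor below 
is hypothetical, not from any patch):

{code}
// Hypothetical aggregation: collect the failed node from each striped
// streamer into one shared set, to be passed to the NN when allocating the
// next block group so those nodes are skipped.
Set<DatanodeInfo> excludedNodes = new HashSet<>();
for (StripedDataStreamer streamer : streamers) {
  DatanodeInfo failed = streamer.getFailedNode(); // hypothetical accessor
  if (failed != null) {
    excludedNodes.add(failed);
  }
}
{code}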



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-09-30 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9181:
-

 Summary: Better handling of exceptions thrown during upgrade 
shutdown
 Key: HDFS-9181
 URL: https://issues.apache.org/jira/browse/HDFS-9181
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
upgrade shutdown. That may be appropriate as a temporary fix, but it would be 
better if the exception were handled in some way.

One way to handle it is by emitting a warning message; there may be other ways 
to handle it as well. This jira is created to discuss how to handle this case 
better.
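
For instance, the warning-message option could look roughly like this 
(illustrative only; the actual call site from HDFS-7533 may differ):

{code}
// Illustrative shape of the improvement: log the exception that the
// upgrade-shutdown path currently suppresses, then continue shutting down.
try {
  dataNode.shutdown();
} catch (Exception e) {
  LOG.warn("Exception while shutting down DataNode for upgrade; continuing", e);
}
{code}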



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-09-30 Thread Yi Liu (JIRA)
Yi Liu created HDFS-9182:


 Summary: Cleanup the findbugs and other issues after HDFS EC 
merged to trunk.
 Key: HDFS-9182
 URL: https://issues.apache.org/jira/browse/HDFS-9182
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Critical


https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html

https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9183) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-09-30 Thread Yi Liu (JIRA)
Yi Liu created HDFS-9183:


 Summary: Cleanup the findbugs and other issues after HDFS EC 
merged to trunk.
 Key: HDFS-9183
 URL: https://issues.apache.org/jira/browse/HDFS-9183
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Critical


https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html

https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9183) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-09-30 Thread Yi Liu (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HDFS-9183.
--
Resolution: Duplicate
  Assignee: (was: Yi Liu)

> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9183
> URL: https://issues.apache.org/jira/browse/HDFS-9183
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Priority: Critical
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9184) Logging HDFS operation's caller context into audit logs

2015-09-30 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-9184:
---

 Summary: Logging HDFS operation's caller context into audit logs
 Key: HDFS-9184
 URL: https://issues.apache.org/jira/browse/HDFS-9184
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Mingliang Liu
Assignee: Mingliang Liu


For a given HDFS operation (e.g. deleting a file), it's very helpful to track 
which upper-level job issued it. The upper-level callers may be specific Oozie 
tasks, MR jobs, or Hive queries. One scenario is that when the namenode (NN) is 
abused/spammed, the operator may want to know immediately which MR job should 
be blamed so that she can kill it. To this end, the caller context contains at 
least an application-dependent "tracking id".

There are several existing techniques that may be related to this problem.
1. Currently the HDFS audit log tracks the user of the operation, which is 
obviously not enough. It's common that the same user issues multiple jobs at 
the same time. Even for a single top-level task, tracking back to a specific 
caller in a chain of operations of the whole workflow (e.g. Oozie -> Hive -> 
Yarn) is hard, if not impossible.
2. HDFS integrated {{htrace}} support for providing tracing information across 
multiple layers. Spans are created in many places and interconnected in a tree 
structure, which relies on offline analysis across RPC boundaries. For this use 
case, {{htrace}} has to be enabled at a 100% sampling rate, which introduces 
significant overhead. Moreover, passing additional information (via 
annotations) other than the span id from the root of the tree to the leaves is 
significant additional work.
3. In [HDFS-4680 | https://issues.apache.org/jira/browse/HDFS-4680], there is 
some related discussion on this topic. The final patch implemented the tracking 
id as part of the delegation token. This protects the tracking information from 
being changed or impersonated. However, Kerberos-authenticated connections and 
insecure connections don't have tokens. [HADOOP-8779] proposes to use tokens in 
all scenarios, but that might mean changes to several upstream projects and a 
major change in their security implementation.

We propose another approach to address this problem. We also treat the HDFS 
audit log as a good place for after-the-fact root cause analysis. We propose to 
put the caller id (e.g. a Hive query id) in threadlocals. Specifically, on the 
client side the threadlocal object is passed to the NN as part of the RPC 
header (optional), while on the server side the NN retrieves it from the header 
and puts it into the {{Handler}}'s threadlocals. Finally, in {{FSNamesystem}}, 
the HDFS audit logger will record the caller context for each operation. In 
this way, the existing code is not affected.

It is still challenging to keep a "lying" client from abusing the caller 
context. Our proposal is to add a {{signature}} field to the caller context. 
The client may choose to provide its signature along with the caller id. The 
operator may need to validate the signature at the time of offline analysis. 
The NN is not responsible for validating the signature online.
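
As a minimal sketch of this proposal (the class name, fields, and methods 
below are hypothetical, drawn only from the description above, not from any 
patch):

{code}
// Hypothetical sketch: a caller context held in a thread-local. The RPC
// client would serialize getCurrent() into an optional header field, and the
// NN would copy it into its Handler's threadlocal so the audit logger in
// FSNamesystem can append it to each logged operation.
public final class CallerContext {
  private static final ThreadLocal<CallerContext> CURRENT = new ThreadLocal<>();

  private final String context;    // e.g. a Hive query id (the "tracking id")
  private final byte[] signature;  // optional; validated offline, not by the NN

  public CallerContext(String context, byte[] signature) {
    this.context = context;
    this.signature = signature;
  }

  public String getContext() { return context; }
  public byte[] getSignature() { return signature; }

  public static void setCurrent(CallerContext ctx) { CURRENT.set(ctx); }
  public static CallerContext getCurrent() { return CURRENT.get(); }
}
{code}

A client would set the context once before issuing its HDFS operations, e.g. 
{{CallerContext.setCurrent(new CallerContext(queryId, null))}}, leaving 
existing code paths untouched.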



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2380 - Still Failing

2015-09-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2380/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7633 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:03 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:19 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.073 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:23 h
[INFO] Finished at: 2015-10-01T03:13:46+00:00
[INFO] Final Memory: 55M/698M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-9175
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
18 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverAnyBlocks1

Error Message:
Time out waiting for EC block recovery.

Stack Trace:
java.io.IOException: Time out waiting for EC block recovery.
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:383)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:283)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverAnyBlocks1(TestRecoverStripedFile.java:168)


FAILED:  org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneDataBlock

Error Message:
Time out waiting for EC block recovery.

Stack Trace:
java.io.IOException: Time out waiting for EC block recovery.
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:383)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:283)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneDataBlock(TestRecoverStripedFile.java:144)


FAILED:  
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverThreeParityBlocks

Error Message:
Time out waiting for EC block recovery.

Stack Trace:
java.io.IOException: Time out waiting for EC block recovery.
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:383)
at 
org.apach

Build failed in Jenkins: Hadoop-Hdfs-trunk #2380

2015-09-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2380/

Changes:

[cnauroth] HDFS-9175. Change scope of 'AccessTokenProvider.getAccessToken()' and

--
[...truncated 7440 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.405 sec - in 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.757 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.tools.TestGetGroups
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.083 sec - in 
org.apache.hadoop.hdfs.tools.TestGetGroups
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.005 sec - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.625 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.769 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.851 sec - in 
org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.1 sec - in 
org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.161 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.698 sec - in 
org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.292 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.337 sec - in 
org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.525 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.28 sec - in 
org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.714 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.683 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 137.373 sec - 
in org.apache.hadoop.hdfs.TestDFSClientRetries
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.704 sec - 
in org.apache.hadoop.hdfs.TestBlockReaderLocal
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.541 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.784 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 246.353 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.266 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.243 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.733 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.667 sec - 
in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.491 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Running org.apache.hadoop.hdfs.TestDisableConnCache
Tests run: 1, Failures: 0, Errors: 0

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #440

2015-09-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/440/

Changes:

[cnauroth] HDFS-9175. Change scope of 'AccessTokenProvider.getAccessToken()' and

--
[...truncated 7973 lines...]
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.31 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec - in 
org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.264 sec - in 
org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.527 sec - in 
org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.159 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestErasureCodingPolicies
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.021 sec - in 
org.apache.hadoop.hdfs.TestErasureCodingPolicies
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.871 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.323 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.211 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.815 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.912 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.487 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.737 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.15 sec - in 
org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.274 sec - in 
org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.622 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit S

Hadoop-Hdfs-trunk-Java8 - Build # 440 - Still Failing

2015-09-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/440/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8166 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:30 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:01 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.485 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:05 h
[INFO] Finished at: 2015-10-01T03:53:53+00:00
[INFO] Final Memory: 55M/480M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-9175
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
13 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 6
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 2
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 6
  done: false
] expected:<0> but was:<3>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! 
Current writer state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 6
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 2
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 6
 done: false
] expected:<0> but was:<3>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)


FAILED:  org.apache.hadoop.hdfs.TestRecoverStri

Hadoop-Hdfs-trunk-Java8 - Build # 441 - Still Failing

2015-09-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/441/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7106 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:20 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:24 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.070 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:29 h
[INFO] Finished at: 2015-10-01T05:30:22+00:00
[INFO] Final Memory: 72M/1098M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter210337576650574002.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire4650527963384819210tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_833950732762319267086tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating MAPREDUCE-6494
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-2
  target length: 50
  current item: 5
  done: false
] expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! 
Current writer state:[Circular Writer:
 directory: /test-2
 target length: 50
 current item: 5
 done: false
] expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEqua

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #441

2015-09-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/441/

Changes:

[rkanter] MAPREDUCE-6494. Permission issue when running archive-logs tool as

--
[...truncated 6913 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.214 sec - 
in org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.133 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.841 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.977 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.545 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.927 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.594 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.057 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.072 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.165 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.451 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestStoragePolicySummary
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.202 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestStoragePolicySummary
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageStorageInspector
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.414 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSImageStorageInspector
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.358 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager
Java HotSpot(TM) 64-Bit Server VM

[jira] [Created] (HDFS-9185) TestRecoverStripedFile is failing

2015-09-30 Thread Rakesh R (JIRA)
Rakesh R created HDFS-9185:
--

 Summary: TestRecoverStripedFile is failing
 Key: HDFS-9185
 URL: https://issues.apache.org/jira/browse/HDFS-9185
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Critical


Below is the message taken from the build:
{code}
Error Message

Time out waiting for EC block recovery.
Stacktrace

java.io.IOException: Time out waiting for EC block recovery.
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:383)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:283)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverAnyBlocks1(TestRecoverStripedFile.java:168)
{code}

Reference : https://builds.apache.org/job/PreCommit-HDFS-Build/12758



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)