Hadoop-Hdfs-trunk-Java8 - Build # 269 - Still Failing

2015-08-07 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/269/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7389 lines...]
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:09 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:02 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.055 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:06 h
[INFO] Finished at: 2015-08-07T13:43:50+00:00
[INFO] Final Memory: 78M/890M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter514359078820222302.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire2315396774313644126tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_2056761235941036579881tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4348907 bytes
Compression is 0.0%
Took 3.4 sec
Recording test results
Updating MAPREDUCE-6257
Updating HDFS-8856
Updating HDFS-8499
Updating HDFS-8623
Updating MAPREDUCE-6443
Updating YARN-3974
Updating YARN-3948
Updating YARN-4019
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 33
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 29
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 16
  done: false
] expected:<0> but was:<3>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! 
Current writer state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 33
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 29
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 16
 done: false
] expected:<0> but was:<3>
at org.junit.Assert.fail(Assert.java

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #269

2015-08-07 Thread Apache Jenkins Server
See 

Changes:

[junping_du] YARN-4019. Add JvmPauseMonitor to ResourceManager and NodeManager. 
Contributed by Robert Kanter.

[junping_du] MAPREDUCE-6443. Add JvmPauseMonitor to JobHistoryServer. 
Contributed by Robert Kanter.

[aw] MAPREDUCE-6257. Document encrypted spills (Bibin A Chundatt via aw)

[jing9] Revert "HDFS-8623. Refactor NameNode handling of invalid, corrupt, and 
under-recovery blocks. Contributed by Zhe Zhang."

[jing9] Revert "HDFS-8499. Refactor BlockInfo class hierarchy with static 
helper class. Contributed by Zhe Zhang."

[Carlo Curino] YARN-3974. Refactor the reservation system test cases to use 
parameterized base test. (subru via curino)

[arp] HDFS-8856. Make LeaseManager#countPath O(1). (Contributed by Arpit 
Agarwal)

[rohithsharmaks] YARN-3948. Display Application Priority in RM Web UI.(Sunil G 
via rohithsharmaks)

--
[...truncated 7196 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.496 sec - in 
org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgressMetrics
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgress
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.348 sec - in 
org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgress
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.11 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestXAttrConfigFlag
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.246 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestXAttrConfigFlag
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.578 sec - 
in org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.702 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestDeleteRace
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.21 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestDeleteRace
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.205 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogFileInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.986 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestEditLogFileInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeOptionParsing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.291 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeOptionParsing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditsDoubleBuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.179 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestEditsDoubleBuffer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestPathComponents
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.192 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestPathComponents
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.311 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Java HotSpot(TM) 64-Bit

Build failed in Jenkins: Hadoop-Hdfs-trunk #2207

2015-08-07 Thread Apache Jenkins Server
See 

Changes:

[junping_du] YARN-4019. Add JvmPauseMonitor to ResourceManager and NodeManager. 
Contributed by Robert Kanter.

[junping_du] MAPREDUCE-6443. Add JvmPauseMonitor to JobHistoryServer. 
Contributed by Robert Kanter.

[aw] MAPREDUCE-6257. Document encrypted spills (Bibin A Chundatt via aw)

[jing9] Revert "HDFS-8623. Refactor NameNode handling of invalid, corrupt, and 
under-recovery blocks. Contributed by Zhe Zhang."

[jing9] Revert "HDFS-8499. Refactor BlockInfo class hierarchy with static 
helper class. Contributed by Zhe Zhang."

[Carlo Curino] YARN-3974. Refactor the reservation system test cases to use 
parameterized base test. (subru via curino)

[arp] HDFS-8856. Make LeaseManager#countPath O(1). (Contributed by Arpit 
Agarwal)

[rohithsharmaks] YARN-3948. Display Application Priority in RM Web UI.(Sunil G 
via rohithsharmaks)

--
[...truncated 7351 lines...]
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.387 sec <<< 
FAILURE! - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations  Time elapsed: 4.387 
sec  <<< ERROR!
org.apache.hadoop.fs.UnsupportedFileSystemException: 
fs.AbstractFileSystem.webhdfs.impl=null: No AbstractFileSystem configured for 
scheme: webhdfs
at 
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:161)
at 
org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:325)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
at 
org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:322)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:439)
at 
org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations.clusterSetupAtBeginning(TestWebHdfsFileContextMainOperations.java:76)

Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.231 sec - in 
org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.779 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.138 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.673 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.855 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.165 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.918 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.264 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.4 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.95 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.073 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.179 sec - 
in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.138 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.462 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 

Hadoop-Hdfs-trunk - Build # 2207 - Still Failing

2015-08-07 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2207/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7544 lines...]
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:24 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:54 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.061 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:58 h
[INFO] Finished at: 2015-08-07T14:33:22+00:00
[INFO] Final Memory: 66M/769M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2199
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 3861697 bytes
Compression is 0.0%
Took 13 sec
Recording test results
Updating MAPREDUCE-6257
Updating HDFS-8856
Updating HDFS-8499
Updating HDFS-8623
Updating MAPREDUCE-6443
Updating YARN-3974
Updating YARN-3948
Updating YARN-4019
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
16 tests failed.
REGRESSION:  org.apache.hadoop.fs.TestGlobPaths.testLocalFilesystem

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.fs.TestGlobPaths.testLocalFilesystem(TestGlobPaths.java:1309)


FAILED:  
org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations.org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations

Error Message:
fs.AbstractFileSystem.swebhdfs.impl=null: No AbstractFileSystem configured for 
scheme: swebhdfs

Stack Trace:
org.apache.hadoop.fs.UnsupportedFileSystemException: 
fs.AbstractFileSystem.swebhdfs.impl=null: No AbstractFileSystem configured for 
scheme: swebhdfs
at 
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:161)
at 
org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:325)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
at 
org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:322)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:439)
at 
org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations.clusterSetupAtBeginning(TestSWebHdfsFileContextMainOperations.java:89)


FAILED:  
org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations.org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations


[jira] [Created] (HDFS-8873) throttle directoryScanner

2015-08-07 Thread Nathan Roberts (JIRA)
Nathan Roberts created HDFS-8873:


 Summary: throttle directoryScanner
 Key: HDFS-8873
 URL: https://issues.apache.org/jira/browse/HDFS-8873
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.1
Reporter: Nathan Roberts


The new 2-level directory layout can make directory scans expensive in terms of 
disk seeks (see HDFS-8791 for details).

It would be good if the directoryScanner() had a configurable duty cycle that 
would reduce its impact on disk performance (much like the approach in 
HDFS-8617).

Without such a throttle, disks can be 100% busy for many minutes at a time. 
Assuming the common case of all inodes in cache but no directory blocks cached, 
a full directory listing requires 64K seeks, which translates to roughly 655 
seconds.
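A duty-cycle throttle of the kind proposed could look roughly like the sketch below. This is a minimal illustration, not actual DirectoryScanner code; the class and method names are hypothetical. The idea is that after each slice of scan work, the scanner sleeps long enough that it is busy only a configured fraction of wall-clock time.

```java
// Hypothetical sketch of a duty-cycle throttle for a disk scanner.
public class ScanThrottle {
    // Fraction of wall-clock time the scanner may be busy, e.g. 0.1 for 10%.
    private final double dutyCycle;

    public ScanThrottle(double dutyCycle) {
        if (dutyCycle <= 0 || dutyCycle > 1) {
            throw new IllegalArgumentException("dutyCycle must be in (0, 1]");
        }
        this.dutyCycle = dutyCycle;
    }

    /** Milliseconds to pause after a work slice that took workMillis. */
    public long pauseMillis(long workMillis) {
        // busy / (busy + idle) == dutyCycle  =>  idle = busy * (1 - d) / d
        return (long) (workMillis * (1.0 - dutyCycle) / dutyCycle);
    }

    public static void main(String[] args) {
        ScanThrottle t = new ScanThrottle(0.25);
        // At a 25% duty cycle, 100 ms of work earns a 300 ms pause.
        System.out.println(t.pauseMillis(100));
    }
}
```

A duty cycle of 1.0 yields no pause, which recovers today's unthrottled behavior.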





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8874) Add DN metrics for balancer and other block movement scenarios

2015-08-07 Thread Ming Ma (JIRA)
Ming Ma created HDFS-8874:
-

 Summary: Add DN metrics for balancer and other block movement 
scenarios
 Key: HDFS-8874
 URL: https://issues.apache.org/jira/browse/HDFS-8874
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Chris Trezzo


For balancer, mover and migrator (HDFS-8789), we want to know how close each is 
to the DN's throttling thresholds. Although the DN has existing metrics such as 
{{BytesWritten}}, {{BytesRead}}, {{CopyBlockOpNumOps}} and 
{{ReplaceBlockOpNumOps}}, there is no metric that indicates the number of bytes 
moved.

We can add {{ReplaceBlockBytesWritten}} and {{CopyBlockBytesRead}} to account 
for the bytes moved in ReplaceBlock and CopyBlock operations. In addition, we 
can add throttling metrics for {{DataTransferThrottler}} and 
{{BlockBalanceThrottler}}.
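The proposed counters could be sketched as below. This uses plain AtomicLong stand-ins for illustration; an actual patch would register them through Hadoop's metrics2 framework (e.g. a {{MutableCounterLong}} with the {{@Metric}} annotation) so they show up alongside the existing DN metrics.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative stand-in for the proposed DataNode block-movement counters.
public class BlockMoveMetrics {
    private final AtomicLong replaceBlockBytesWritten = new AtomicLong();
    private final AtomicLong copyBlockBytesRead = new AtomicLong();

    // Incremented from the ReplaceBlock op handler as bytes are written.
    public void addReplaceBlockBytesWritten(long n) {
        replaceBlockBytesWritten.addAndGet(n);
    }

    // Incremented from the CopyBlock op handler as bytes are read.
    public void addCopyBlockBytesRead(long n) {
        copyBlockBytesRead.addAndGet(n);
    }

    public long getReplaceBlockBytesWritten() {
        return replaceBlockBytesWritten.get();
    }

    public long getCopyBlockBytesRead() {
        return copyBlockBytesRead.get();
    }
}
```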





[jira] [Created] (HDFS-8875) Optimize the wait time in Balancer for federation scenario

2015-08-07 Thread Ming Ma (JIRA)
Ming Ma created HDFS-8875:
-

 Summary: Optimize the wait time in Balancer for federation scenario
 Key: HDFS-8875
 URL: https://issues.apache.org/jira/browse/HDFS-8875
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Chris Trezzo


The Balancer waits between two consecutive iterations to give block movement 
time to be fully committed (a return from replaceBlock doesn't mean the NN's 
block map has been updated and the block has been invalidated on the source 
node).

This wait time can be 23 seconds if {{dfs.heartbeat.interval}} is set to 10 
and {{dfs.namenode.replication.interval}} is set to 3. In the federation case, 
since each iteration walks through all namespaces, this wait becomes 
unnecessary: while the Balancer is processing the next namespace, the 
namespace it just finished has time to commit.

In addition, the Balancer calls {{Collections.shuffle(connectors)}}, which 
doesn't seem necessary.
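The 23-second figure is consistent with a wait of two heartbeat intervals plus one replication interval; treating that as the formula (an assumption for illustration, not a quote of the Balancer source):

```java
// Illustrative arithmetic for the Balancer's per-iteration wait, assuming
// wait = 2 * dfs.heartbeat.interval + dfs.namenode.replication.interval.
public class BalancerWait {
    public static long waitSeconds(long heartbeatIntervalSec,
                                   long replicationIntervalSec) {
        return 2 * heartbeatIntervalSec + replicationIntervalSec;
    }

    public static void main(String[] args) {
        // heartbeat=10s, replication=3s  ->  2*10 + 3 = 23 seconds
        System.out.println(waitSeconds(10, 3));
    }
}
```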





[jira] [Created] (HDFS-8876) Make hard coded parameters used by balancer and other tools configurable

2015-08-07 Thread Ming Ma (JIRA)
Ming Ma created HDFS-8876:
-

 Summary: Make hard coded parameters used by balancer and other 
tools configurable
 Key: HDFS-8876
 URL: https://issues.apache.org/jira/browse/HDFS-8876
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Chris Trezzo


While investigating how to speed up the balancer, at least to the level 
specified by {{dfs.datanode.balance.bandwidthPerSec}}, we found that parameters 
such as {{MAX_BLOCKS_SIZE_TO_FETCH}} and {{SOURCE_BLOCKS_MIN_SIZE}} are hard 
coded. These parameters are related to the block size and to other configurable 
parameters used by the balancer, so at the least we should make them 
configurable. In the longer term, it would be worth exploring whether all these 
related configurations can be simplified.
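Making such a parameter configurable could look roughly like this. The sketch uses java.util.Properties as a stand-in for Hadoop's Configuration, and the key name is invented; an actual patch would pick a real {{dfs.balancer.*}} key and default.

```java
import java.util.Properties;

// Sketch: replace a hard-coded balancer limit with a configurable setting.
public class BalancerConf {
    // Hypothetical key name, for illustration only.
    public static final String MAX_SIZE_TO_FETCH_KEY =
        "dfs.balancer.max-blocks-size-to-fetch";

    /** Read a long-valued setting, falling back to the hard-coded default. */
    public static long getLong(Properties conf, String key, long defaultValue) {
        String v = conf.getProperty(key);
        return v == null ? defaultValue : Long.parseLong(v.trim());
    }
}
```

Callers would pass the current hard-coded constant as the default, so behavior is unchanged unless an operator overrides the key.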





[jira] [Created] (HDFS-8877) Allow bypassing some minor exceptions while loading editlog

2015-08-07 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-8877:
---

 Summary: Allow bypassing some minor exceptions while loading 
editlog
 Key: HDFS-8877
 URL: https://issues.apache.org/jira/browse/HDFS-8877
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao


Usually when users hit editlog corruption like HDFS-7587, before upgrading to a 
new version with the bug fix, developers first have to provide a customized jar 
to bypass the exception while loading edits. The whole process is usually 
painful.

If we can confirm that the corruption/exception is a known issue and can be 
ignored after upgrading to the newer version, it would be helpful to let 
users/developers specify certain types/numbers of exceptions that can be 
bypassed while loading edits.





[jira] [Created] (HDFS-8878) An HDFS built-in DistCp

2015-08-07 Thread Linxiao Jin (JIRA)
Linxiao Jin created HDFS-8878:
-

 Summary: An HDFS built-in DistCp 
 Key: HDFS-8878
 URL: https://issues.apache.org/jira/browse/HDFS-8878
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Linxiao Jin
Assignee: Linxiao Jin


For now, we use DistCp to copy directories, which works quite well. However, 
it would be better to have an HDFS built-in, efficient directory copy tool. It 
could be faster by cutting out the redundant communication among HDFS, YARN 
and MapReduce. It could also release the resources DistCp consumes in the job 
tracker and YARN, and be easier to debug.

We need more discussion on a new protocol between the NN and DNs of different 
clusters to achieve HDFS-level command sending and data transfer. One 
available hacky solution: the srcNN gets the block distribution of the target 
file and asks each datanode to start a DFSClient and copy its local 
short-circuited block as a file in the dst cluster. After all the block files 
in the dst cluster are complete, use a DFSClient to concat them together into 
the target destination file. A more optimized solution might implement a newly 
designed protocol that communicates across clusters at a lower layer rather 
than through DFSClient.





Re: Jenkins : Unable to create new native thread

2015-08-07 Thread Ted Yu
I observed the same behavior in an hbase QA run as well:
https://builds.apache.org/job/PreCommit-HBASE-Build/15000/console

This was on ubuntu-2.
Looks like certain machines may have an environment issue.

FYI

On Wed, Aug 5, 2015 at 12:59 AM, Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:

> Dear All
>
> had seen following error (OOM) in HDFS-1148 and Hadoop-12302..jenkin
> machine have some problem..?
>
>
>
> Error Details
>
> unable to create new native thread
>
> Stack Trace
>
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
>
>
>
> Thanks & Regards
>
>  Brahma Reddy Battula
>
>
>
>


[jira] [Created] (HDFS-8879) Quota by storage type usage incorrectly initialized upon namenode restart

2015-08-07 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-8879:


 Summary: Quota by storage type usage incorrectly initialized upon 
namenode restart
 Key: HDFS-8879
 URL: https://issues.apache.org/jira/browse/HDFS-8879
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao


This was found by [~kihwal] as part of HDFS-8865 work in this 
[comment|https://issues.apache.org/jira/browse/HDFS-8865?focusedCommentId=14660904&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14660904].

The unit tests 
testQuotaByStorageTypePersistenceInFsImage/testQuotaByStorageTypePersistenceInFsEdit
 failed to detect this because they were using an obsolete FSDirectory 
instance. Once the highlighted line below is added, the issue can be 
reproduced.

{code}
>fsdir = cluster.getNamesystem().getFSDirectory();
INode testDirNodeAfterNNRestart = fsdir.getINode4Write(testDir.toString());
{code}





[jira] [Created] (HDFS-8880) NameNode should periodically log metrics

2015-08-07 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-8880:
---

 Summary: NameNode should periodically log metrics
 Key: HDFS-8880
 URL: https://issues.apache.org/jira/browse/HDFS-8880
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The NameNode can periodically log metrics to help with debugging when the 
cluster is not set up with another metrics monitoring scheme.
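A minimal sketch of what periodic metrics logging could look like (the class and metric names are hypothetical; a real patch would pull values from the NameNode's existing metrics and schedule the log call, e.g. with a ScheduledExecutorService):

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch: render a metrics snapshot as one stable log line.
public class MetricsSnapshotLogger {
    /** Keys are sorted so successive log lines are easy to diff. */
    public static String format(Map<String, Long> metrics) {
        StringBuilder sb = new StringBuilder("NameNode metrics:");
        for (Map.Entry<String, Long> e : new TreeMap<>(metrics).entrySet()) {
            sb.append(' ').append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    // In the NameNode this would be scheduled periodically, roughly:
    //   executor.scheduleAtFixedRate(
    //       () -> LOG.info(format(snapshot())), 0, 60, TimeUnit.SECONDS);
}
```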






[jira] [Resolved] (HDFS-8877) Allow bypassing some minor exceptions while loading editlog

2015-08-07 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao resolved HDFS-8877.
-
Resolution: Not A Problem

> Allow bypassing some minor exceptions while loading editlog
> ---
>
> Key: HDFS-8877
> URL: https://issues.apache.org/jira/browse/HDFS-8877
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>
> Usually when users hit editlog corruption like HDFS-7587, before upgrading to 
> a new version with the bug fix, a customized jar has to be provided by 
> developers first to bypass the exception while loading edits. The whole 
> process is usually painful.
> If we can confirm the corruption/exception is a known issue and can be 
> ignored after upgrading to the newer version, it may be helpful to have the 
> capability for users/developers to specify certain types/numbers of 
> exceptions that can be bypassed while loading edits.


