Build failed in Jenkins: Hadoop-Hdfs-trunk #1859

2014-09-02 Thread Apache Jenkins Server
See 

--
[...truncated 5264 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.463 sec - in 
org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.183 sec - in 
org.apache.hadoop.hdfs.TestSeekBug
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.032 sec - in 
org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.498 sec - in 
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.76 sec - in 
org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.185 sec - in 
org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.198 sec - in 
org.apache.hadoop.hdfs.TestDFSPermission
Running org.apache.hadoop.hdfs.TestModTime
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.238 sec - in 
org.apache.hadoop.hdfs.TestModTime
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.582 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.774 sec - 
in org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitShm
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.47 sec - in 
org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitShm
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.947 sec - in 
org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.337 sec - in 
org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.hdfs.TestLeaseRecovery
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.697 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecovery
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.058 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.hdfs.TestDefaultNameNodePort
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.787 sec - in 
org.apache.hadoop.hdfs.TestDefaultNameNodePort
Running org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.745 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Running org.apache.hadoop.hdfs.TestClose
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.412 sec - in 
org.apache.hadoop.hdfs.TestClose
Running org.apache.hadoop.hdfs.TestDFSMkdirs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.781 sec - in 
org.apache.hadoop.hdfs.TestDFSMkdirs
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.441 sec - in 
org.apache.hadoop.hdfs.TestFileCreationEmpty
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.652 sec - in 
org.apache.hadoop.hdfs.TestWriteRead
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.371 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestDFSFinalize
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.613 sec - in 
org.apache.hadoop.hdfs.TestDFSFinalize
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.817 sec - 
in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestEncryptionZones
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.282 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZones
Running org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.437 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.124 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Te

Hadoop-Hdfs-trunk - Build # 1859 - Still Failing

2014-09-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1859/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5457 lines...]
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  02:10 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  2.190 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:10 h
[INFO] Finished at: 2014-09-02T13:44:31+00:00
[INFO] Final Memory: 64M/865M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: ExecutionException; nested exception is 
java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without saying properly goodbye. VM crash or System.exit called ?
[ERROR] Command was/bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.6.0_45-64/jre/bin/java -Xmx1024m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3942908368112243678.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire6690115744345655849tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_2915154092838545109393tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

[jira] [Created] (HDFS-6982) nntop: top-like tool for name node users

2014-09-02 Thread Maysam Yabandeh (JIRA)
Maysam Yabandeh created HDFS-6982:
-

 Summary: nntop: top-like tool for name node users
 Key: HDFS-6982
 URL: https://issues.apache.org/jira/browse/HDFS-6982
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Maysam Yabandeh


In this jira we motivate the need for nntop, a tool that, similarly to what top 
does in Linux, lists the top users of the HDFS name node and gives insight into 
which users are sending the majority of each traffic type to the name node. 
This information is most critical when the name node is under pressure and the 
HDFS admin needs to know which user is hammering the name node, and with what 
kind of requests. Here we present the design of nntop, which has been in 
production at Twitter for the past 10 months. nntop proved to have low cpu 
overhead (< 2% in a cluster of 4K nodes) and a low memory footprint (less than 
a few MB), and to be quite efficient on the write path (only two hash lookups 
for updating a metric).
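The "two hash lookups" write path can be pictured with a small sketch (Python, purely illustrative; `TopMetrics`, `report`, and `top_users` are hypothetical names, not the actual nntop code):

```python
from collections import defaultdict

class TopMetrics:
    """Per-user, per-operation counters for NameNode traffic."""

    def __init__(self):
        # user -> {op -> count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def report(self, user, op):
        # Recording one op costs exactly two hash lookups:
        # one for the user bucket, one for the op counter.
        self.counts[user][op] += 1

    def top_users(self, op, n=3):
        # Rank users by how much of this traffic type they send.
        ranked = sorted(self.counts.items(),
                        key=lambda kv: kv[1][op], reverse=True)
        return [(u, ops[op]) for u, ops in ranked[:n] if ops[op] > 0]

m = TopMetrics()
for _ in range(5):
    m.report("alice", "getFileInfo")
m.report("bob", "getFileInfo")
m.report("bob", "mkdirs")
# m.top_users("getFileInfo") -> alice first, then bob
```

The ranking pass is more expensive, but it only runs when an admin asks for the top users, not on the hot write path.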



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-6983) TestBalancer#testExitZeroOnSuccess fails intermittently

2014-09-02 Thread Mit Desai (JIRA)
Mit Desai created HDFS-6983:
---

 Summary: TestBalancer#testExitZeroOnSuccess fails intermittently
 Key: HDFS-6983
 URL: https://issues.apache.org/jira/browse/HDFS-6983
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Mit Desai


TestBalancer#testExitZeroOnSuccess fails intermittently on branch-2, and 
probably fails on trunk too.

The test fails 1 in 20 times when I ran it in a loop. Here is how it fails.

{noformat}
org.apache.hadoop.hdfs.server.balancer.TestBalancer
testExitZeroOnSuccess(org.apache.hadoop.hdfs.server.balancer.TestBalancer)  
Time elapsed: 53.965 sec  <<< ERROR!
java.util.concurrent.TimeoutException: Rebalancing expected avg utilization to 
become 0.2, but on datanode 127.0.0.1:35502 it remains at 0.08 after more than 
4 msec.
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.waitForBalancer(TestBalancer.java:321)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancerCli(TestBalancer.java:632)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:549)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:437)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:645)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testExitZeroOnSuccess(TestBalancer.java:845)


Results :

Tests in error: 
  
TestBalancer.testExitZeroOnSuccess:845->oneNodeTest:645->doTest:437->doTest:549->runBalancerCli:632->waitForBalancer:321
 Timeout
{noformat}
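A timeout like the one in `waitForBalancer` above typically comes from a poll-until-condition loop; a generic sketch of that pattern (Python, illustrative only, not the Hadoop test code):

```python
import time

def wait_for(condition, timeout_sec, poll_sec=0.01, now=time.monotonic):
    """Poll `condition` until it returns True or `timeout_sec` elapses."""
    deadline = now() + timeout_sec
    while now() < deadline:
        if condition():
            return True
        time.sleep(poll_sec)
    raise TimeoutError("condition not met within %.1fs" % timeout_sec)

# Simulate a datanode whose utilization converges after a few polls.
readings = iter([0.08, 0.08, 0.2])
current = {"util": 0.0}

def converged():
    current["util"] = next(readings, current["util"])
    return abs(current["util"] - 0.2) < 1e-9

wait_for(converged, timeout_sec=2.0)
```

Flakiness shows up when convergence occasionally takes longer than the deadline; the usual fixes are a longer timeout or a less timing-sensitive success condition.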





[jira] [Created] (HDFS-6984) In Hadoop 3, make FileStatus no longer a Writable

2014-09-02 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-6984:
--

 Summary: In Hadoop 3, make FileStatus no longer a Writable
 Key: HDFS-6984
 URL: https://issues.apache.org/jira/browse/HDFS-6984
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


FileStatus was a Writable in Hadoop 2 and earlier.  Originally, we used this to 
serialize it and send it over the wire.  But in Hadoop 2 and later, we have the 
protobuf {{HdfsFileStatusProto}}, which serves to serialize this information.  
The protobuf form is preferable, since it allows us to add new fields in a 
backwards-compatible way.  Another issue is that a lot of subclasses of 
FileStatus already fail to override the Writable methods of the superclass, 
breaking the interface contract that reading back a written status should be 
equal to the original status.

In Hadoop 3, we should just make FileStatus no longer a writable so that we 
don't have to deal with these issues.  It's probably too late to do this in 
Hadoop 2, since user code may be relying on the ability to use the Writable 
methods on FileStatus objects there.
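The broken roundtrip contract can be illustrated with a toy Writable-style pair of classes (Python stand-ins, not the Hadoop classes): a subclass adds a field but inherits the parent's serialization, so the extra state is silently dropped on the wire.

```python
import io
import struct

class Status:
    """Minimal Writable-like base: serializes only its own fields."""

    def __init__(self, length=0):
        self.length = length

    def write(self, out):
        out.write(struct.pack(">q", self.length))

    def read(self, inp):
        (self.length,) = struct.unpack(">q", inp.read(8))

class CachedStatus(Status):
    # Adds a field but does NOT override write()/read(),
    # so `cached` never makes it onto the wire.
    def __init__(self, length=0, cached=False):
        super().__init__(length)
        self.cached = cached

orig = CachedStatus(length=42, cached=True)
buf = io.BytesIO()
orig.write(buf)
buf.seek(0)
back = CachedStatus()
back.read(buf)
# back.length survives, but back.cached is lost: roundtrip != original
```

A protobuf message sidesteps this because unknown or added fields are carried by the encoding itself rather than by hand-written read/write methods.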





[jira] [Created] (HDFS-6985) Add final keywords, documentation, etc. to ReplaceDatanodeOnFailure

2014-09-02 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-6985:
--

 Summary: Add final keywords, documentation, etc. to 
ReplaceDatanodeOnFailure
 Key: HDFS-6985
 URL: https://issues.apache.org/jira/browse/HDFS-6985
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


* use the final qualifier consistently in the 
{{ReplaceDatanodeOnFailure#Condition}} classes

* add a debug log message in the DFSClient explaining which pipeline failure 
policy is being used.

* add JavaDoc for ReplaceDatanodeOnFailure

* the documentation for 
dfs.client.block.write.replace-datanode-on-failure.best-effort should make it 
clear that the configuration key refers to pipeline recovery.





[jira] [Created] (HDFS-6986) DistributedFileSystem must get delegation tokens from configured KeyProvider

2014-09-02 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HDFS-6986:


 Summary: DistributedFileSystem must get delegation tokens from 
configured KeyProvider
 Key: HDFS-6986
 URL: https://issues.apache.org/jira/browse/HDFS-6986
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur


{{KeyProvider}}, via {{KeyProviderDelegationTokenExtension}}, provides 
delegation tokens. {{DistributedFileSystem}} should augment the HDFS delegation 
tokens with the KeyProvider ones, so that tasks can interact with the 
KeyProvider when it is a client/server impl (KMS).
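"Augmenting" here amounts to returning the KeyProvider's delegation tokens alongside the filesystem's own when a client collects tokens for a job. A minimal sketch (hypothetical names and shapes, not the HDFS API):

```python
def add_delegation_tokens(renewer, fs_tokens, key_provider=None):
    """Collect the filesystem's tokens plus, if a KeyProvider is
    configured, its delegation tokens (e.g. from a KMS)."""
    tokens = list(fs_tokens)
    if key_provider is not None:
        tokens.extend(key_provider.add_delegation_tokens(renewer))
    return tokens

class FakeKms:
    # Stand-in for a client/server KeyProvider that issues its own tokens.
    def add_delegation_tokens(self, renewer):
        return [("kms-dt", renewer)]

tokens = add_delegation_tokens("yarn", [("hdfs-dt", "yarn")], FakeKms())
```

Without the second step, tasks holding only HDFS tokens would fail to authenticate to the KMS when reading or writing encrypted files.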





[jira] [Created] (HDFS-6987) Move CipherSuite xattr information up to the encryption zone root

2014-09-02 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-6987:
-

 Summary: Move CipherSuite xattr information up to the encryption 
zone root
 Key: HDFS-6987
 URL: https://issues.apache.org/jira/browse/HDFS-6987
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: encryption
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Zhe Zhang


All files within a single EZ need to be encrypted with the same CipherSuite. 
Because of this, I think we can store the CipherSuite once in the EZ rather 
than on each file.
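Storing the CipherSuite once on the zone root implies readers resolve it by walking up from a file to the nearest enclosing zone root. A hedged sketch of that lookup (the dict-based layout is illustrative, not the NameNode's xattr storage):

```python
import posixpath

# xattr stored only on encryption-zone roots (illustrative layout)
zone_xattrs = {"/secure": "AES/CTR/NoPadding"}

def cipher_suite_for(path):
    """Walk up from `path` to the nearest ancestor that is an EZ root."""
    while True:
        if path in zone_xattrs:
            return zone_xattrs[path]
        if path == "/":
            return None  # not inside any encryption zone
        path = posixpath.dirname(path)

suite = cipher_suite_for("/secure/project/data.txt")
```

The walk is bounded by path depth, and in exchange each file no longer carries a redundant copy of a value that is constant across the zone.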





[jira] [Created] (HDFS-6988) Make RAM disk eviction thresholds configurable

2014-09-02 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-6988:
---

 Summary: Make RAM disk eviction thresholds configurable
 Key: HDFS-6988
 URL: https://issues.apache.org/jira/browse/HDFS-6988
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal


Per feedback from [~cmccabe] on HDFS-6930, we can make the eviction thresholds 
configurable. The hard-coded thresholds may not be appropriate for very large 
RAM disks.





[jira] [Created] (HDFS-6989) Add multi-node test support for unit test

2014-09-02 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-6989:


 Summary: Add multi-node test support for unit test
 Key: HDFS-6989
 URL: https://issues.apache.org/jira/browse/HDFS-6989
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Currently, the MiniDFSCluster/DFSClient assumes all DFSClients have the same 
local socket address. For the Memory as Storage type unit tests, we want to be 
able to start both local and remote DFSClients and verify that certain 
operations are allowed only for local clients but not remote clients.





[jira] [Created] (HDFS-6990) Add unit test for evict/delete RAM_DISK block with open handle

2014-09-02 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-6990:


 Summary: Add unit test for evict/delete RAM_DISK block with open 
handle
 Key: HDFS-6990
 URL: https://issues.apache.org/jira/browse/HDFS-6990
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This is to verify:
* Evicting a RAM_DISK block with an open handle should fall back to DISK.
* Deleting a (persisted) RAM_DISK block with an open handle should mark the 
block to be deleted upon handle close.

Simply opening a handle to a file in the DFS namespace won't work as expected. 
We need a local FS file handle to the block file. The only meaningful case is 
Short Circuit Read. This JIRA is to validate/enable the two cases with an 
SCR-enabled MiniDFSCluster on non-Mac OS.
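The "delete with open handle" case rests on POSIX unlink semantics: removing a file's name while a process still holds a descriptor keeps the data readable through that descriptor until it is closed. A quick local-FS illustration (Python, Linux/POSIX semantics; this is the behavior the test exercises, not the test itself):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"block data")
os.unlink(path)                      # name gone; inode lives while fd is open
still_exists = os.path.exists(path)  # False: the path no longer resolves
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 100)              # still readable through the handle
os.close(fd)                         # last reference dropped; space reclaimed
```

This is also why SCR matters here: only a short-circuit reader holds a real local-FS descriptor to the block file, so only SCR can observe the deferred-delete behavior.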





[jira] [Created] (HDFS-6991) Notify NN of evicted block before deleting it from RAM disk

2014-09-02 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-6991:
---

 Summary: Notify NN of evicted block before deleting it from RAM 
disk
 Key: HDFS-6991
 URL: https://issues.apache.org/jira/browse/HDFS-6991
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


When evicting a block from RAM disk to persistent storage, the DN should notify 
the NN of the persistent replica before deleting the replica from RAM disk. 
Else there can be a window of time during which the block is considered 
'missing' by the NN.

Found by [~xyao] via HDFS-6950.
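The ordering matters because the NN's replica map is driven entirely by DN reports. A toy model of the safe ordering (illustrative, not DataNode code; names are hypothetical):

```python
# The NN knows a block only through the storage locations DNs report.
nn_locations = {"blk_1": {"ram_disk"}}

def report_added(block, storage):
    nn_locations.setdefault(block, set()).add(storage)

def report_removed(block, storage):
    nn_locations[block].discard(storage)

def evict_safely(block):
    # 1) tell the NN about the persistent replica first...
    report_added(block, "disk")
    # 2) ...then drop the RAM disk copy, so the block is never
    #    location-less from the NN's point of view.
    report_removed(block, "ram_disk")
    assert nn_locations[block], "block must never appear missing"

evict_safely("blk_1")
```

Reversing steps 1 and 2 opens exactly the window described above: between the removal report and the addition report, the NN sees zero locations and marks the block missing.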


