Hadoop-Hdfs-trunk - Build # 1903 - Still Failing

2014-10-16 - Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1903/

###############################################################################
## LAST 60 LINES OF THE CONSOLE ##
###############################################################################
[...truncated 6193 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:23 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.220 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:23 h
[INFO] Finished at: 2014-10-16T13:58:05+00:00
[INFO] Final Memory: 62M/864M
[INFO] 
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7208
Updating YARN-2312
Updating HDFS-7185
Updating HDFS-5089
Updating YARN-2496
Updating MAPREDUCE-5970
Updating MAPREDUCE-5873
Updating HADOOP-11181
Updating YARN-2685
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###############################################################################
## FAILED TESTS (if any) ##
###############################################################################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at org.junit.Assert.assertEquals(Assert.java:542)
    at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
    at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
    at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
    at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-12
    at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
    at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
    at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)

Build failed in Jenkins: Hadoop-Hdfs-trunk #1903

2014-10-16 - Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1903/

Changes:

[jlowe] MAPREDUCE-5873. Shuffle bandwidth computation includes time spent waiting for maps. Contributed by Siqi Li

[jing9] HDFS-7185. The active NameNode will not accept an fsimage sent from the standby during rolling upgrade. Contributed by Jing Zhao.

[jlowe] MAPREDUCE-5970. Provide a boolean switch to enable MR-AM profiling. Contributed by Gera Shegalov

[jianhe] YARN-2312. Deprecated old ContainerId#getId API and updated MapReduce to use ContainerId#getContainerId instead. Contributed by Tsuyoshi OZAWA

[raviprak] HADOOP-11181. GraphiteSink emits wrong timestamps (Sascha Coenen via raviprak)

[vinodkv] YARN-2496. Enhanced Capacity Scheduler to have basic support for allocating resources based on node-labels. Contributed by Wangda Tan.

[vinodkv] YARN-2685. Fixed a bug in CommonNodeLabelsManager that caused wrong resource tracking per label when a host runs multiple node-managers. Contributed by Wangda Tan.

[szetszwo] HDFS-7208. NN doesn't schedule replication when a DN storage fails. Contributed by Ming Ma

[szetszwo] HDFS-5089. When a LayoutVersion support SNAPSHOT, it must support FSIMAGE_NAME_OPTIMIZATION.

--
[...truncated 6000 lines...]
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.632 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.081 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.952 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 151.116 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.652 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.524 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.615 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.036 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.41 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.957 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.028 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.133 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.645 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.812 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.52 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.648 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.379 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.701 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache

[jira] [Created] (HDFS-7255) Customize Java Heap min/max settings for individual processes

2014-10-16 - Mark Tse (JIRA)
Mark Tse created HDFS-7255:
--

 Summary: Customize Java Heap min/max settings for individual processes
 Key: HDFS-7255
 URL: https://issues.apache.org/jira/browse/HDFS-7255
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, journal-node, namenode
Affects Versions: 2.5.1, 2.4.1
Reporter: Mark Tse


The NameNode and JournalNode (and ZKFC) can all run on the same machine, but they all take their heap settings from HADOOP_HEAPSIZE/JAVA_HEAP_MAX. There are scenarios where the NameNode process has different Java memory requirements from the JournalNode and ZKFC (e.g. the machine has 10 GB of RAM and I want the NameNode process to have an 8 GB maximum heap).

HADOOP_(.*)_OPTS variables exist for these processes and can be used to add -Xms and -Xmx flags, but because of how the default for JAVA_HEAP_MAX is set, '-Xmx1000m' is always appended to the final command that starts the NameNode/JournalNode/ZKFC process, so two different Java heap settings end up on the command line (e.g. both -Xmx1000m and -Xmx8g when starting the NameNode).
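
For illustration only, a per-process override of the kind described above might look like the hadoop-env.sh sketch below. The exact *_OPTS variable names and where they land on the java command line depend on the branch's bin scripts, so treat both as assumptions rather than a definitive recipe:

    # hadoop-env.sh -- hypothetical per-process heap overrides (sketch, 2.x-style scripts assumed)
    # Each export adds daemon-specific -Xms/-Xmx flags while preserving whatever was already set.
    export HADOOP_NAMENODE_OPTS="-Xms8g -Xmx8g ${HADOOP_NAMENODE_OPTS}"
    export HADOOP_JOURNALNODE_OPTS="-Xms1g -Xmx1g ${HADOOP_JOURNALNODE_OPTS}"
    export HADOOP_ZKFC_OPTS="-Xmx512m ${HADOOP_ZKFC_OPTS}"

On HotSpot the last -Xmx on the command line generally wins, so overrides like these can take effect even while JAVA_HEAP_MAX still injects -Xmx1000m; that duplicated flag is exactly the untidiness this issue asks to clean up.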

Note: HADOOP_HEAPSIZE is deprecated according to [HADOOP-10950].





[jira] [Created] (HDFS-7256) Encryption Key created in Java Key Store after Namenode start unavailable for EZ Creation

2014-10-16 - Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-7256:


 Summary: Encryption Key created in Java Key Store after Namenode start unavailable for EZ Creation
 Key: HDFS-7256
 URL: https://issues.apache.org/jira/browse/HDFS-7256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption, security
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao


Hit an error, "RemoteException: Key ezkey1 doesn't exist.", when creating an EZ with a key that was created after the NN started.

A brief check of the code shows that the KeyProvider is loaded by the FSN only at NN startup. My workaround is to restart the NN, which triggers a reload of the KeyProvider (see the sketch after the repro steps below). Is this expected?

Repro Steps:

Create a new key after the NN and KMS have started:
hadoop/bin/hadoop key create ezkey1 -size 256 -provider jceks://file/home/hadoop/kms.keystore

List keys:
hadoop@SaturnVm:~/deploy$ hadoop/bin/hadoop key list -provider jceks://file/home/hadoop/kms.keystore -metadata
Listing keys for KeyProvider: jceks://file/home/hadoop/kms.keystore
ezkey1 : cipher: AES/CTR/NoPadding, length: 256, description: null, created: Thu Oct 16 18:51:30 EDT 2014, version: 1, attributes: null
key2 : cipher: AES/CTR/NoPadding, length: 128, description: null, created: Tue Oct 14 19:44:09 EDT 2014, version: 1, attributes: null
key1 : cipher: AES/CTR/NoPadding, length: 128, description: null, created: Tue Oct 14 17:52:36 EDT 2014, version: 1, attributes: null

Create an encryption zone:
hadoop/bin/hdfs dfs -mkdir /Ez1
hadoop@SaturnVm:~/deploy$ hadoop/bin/hdfs crypto -createZone -keyName ezkey1 -path /Ez1
RemoteException: Key ezkey1 doesn't exist.
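
As noted above, the workaround boils down to restarting the NameNode so it constructs a fresh KeyProvider. A rough sketch, assuming the daemon scripts and relative paths of a standard 2.x tarball layout as used in the repro:

    # Workaround sketch, not a fix: restart the NameNode so it reloads its KeyProvider
    hadoop/sbin/hadoop-daemon.sh stop namenode
    hadoop/sbin/hadoop-daemon.sh start namenode
    # then retry: hadoop/bin/hdfs crypto -createZone -keyName ezkey1 -path /Ez1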






[jira] [Resolved] (HDFS-7256) Encryption Key created in Java Key Store after Namenode start unavailable for EZ Creation

2014-10-16 - Yi Liu (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HDFS-7256.
--
Resolution: Not a Problem

I am marking this as "Not a Problem"; please feel free to reopen it if you have a different opinion.



