Hadoop-Hdfs-trunk - Build # 1149 - Still Failing

2012-08-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1149/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11185 lines...]
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.486 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.952 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.924 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.145 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.242 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.494 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.326 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.818 sec

Results :

Failed tests:   
testHdfsDelegationToken(org.apache.hadoop.hdfs.TestHftpDelegationToken): wrong tokens in user expected:<2> but was:<1>

Tests run: 1487, Failures: 1, Errors: 0, Skipped: 4

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:13:30.700s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:13:31.454s
[INFO] Finished at: Wed Aug 29 12:47:11 UTC 2012
[INFO] Final Memory: 17M/281M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4600
Updating HADOOP-8737
Updating HDFS-3849
Updating HADOOP-8738
Updating HDFS-3864
Updating HADOOP-8619
Updating HDFS-3860
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.
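
For context, the lone failure reported in the console above (TestHftpDelegationToken) is an assertion on how many delegation tokens end up attached to the test user. A minimal, hypothetical JUnit sketch of the kind of check that produces a message like "wrong tokens in user expected:<2> but was:<1>" follows; the actual test's setup differs and the names below are illustrative only:

  import static org.junit.Assert.assertEquals;

  import org.apache.hadoop.security.UserGroupInformation;
  import org.apache.hadoop.security.token.Token;
  import org.apache.hadoop.security.token.TokenIdentifier;
  import org.junit.Test;

  public class HftpTokenCountSketch {
    @Test
    public void testTokenCount() throws Exception {
      UserGroupInformation ugi =
          UserGroupInformation.createUserForTesting("alice", new String[] {"users"});

      // Hypothetical setup: an HFTP filesystem is expected to attach both its own
      // token and the underlying HDFS delegation token to the user. If only one of
      // the two is ever added ...
      ugi.addToken(new Token<TokenIdentifier>());

      // ... this assertion fails with "wrong tokens in user expected:<2> but was:<1>".
      assertEquals("wrong tokens in user", 2, ugi.getTokens().size());
    }
  }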

Build failed in Jenkins: Hadoop-Hdfs-trunk #1149

2012-08-29 Thread Apache Jenkins Server
See 

Changes:

[eli] HADOOP-8737. cmake: always use JAVA_HOME to find libjvm.so, jni.h, 
jni_md.h. Contributed by Colin Patrick McCabe

[atm] HDFS-3864. NN does not update internal file mtime for OP_CLOSE when 
reading from the edit log. Contributed by Aaron T. Myers.

[atm] HDFS-3849. When re-loading the FSImage, we should clear the existing 
genStamp and leases. Contributed by Colin Patrick McCabe.

[bobby] MAPREDUCE-4600. TestTokenCache.java from MRV1 no longer compiles  
(daryn via bobby)

[atm] HDFS-3860. HeartbeatManager#Monitor may wrongly hold the writelock of 
namesystem. Contributed by Jing Zhao.

[tucu] HADOOP-8738. junit JAR is showing up in the distro (tucu)

[suresh] HADOOP-8619. WritableComparator must implement no-arg constructor. 
Contributed by Chris Douglas.

--
[...truncated 10992 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.36 sec
Running org.apache.hadoop.hdfs.TestFSOutputSummer
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.227 sec
Running org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.149 sec
Running org.apache.hadoop.hdfs.TestHftpFileSystem
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.017 sec
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.042 sec
Running org.apache.hadoop.hdfs.TestFSInputChecker
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.217 sec
Running org.apache.hadoop.hdfs.TestHftpDelegationToken
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.587 sec <<< FAILURE!
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.773 sec
Running org.apache.hadoop.hdfs.TestByteRangeInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.309 sec
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.059 sec
Running org.apache.hadoop.hdfs.TestRenameWhileOpen
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.138 sec
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.429 sec
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 147.966 sec
Running org.apache.hadoop.hdfs.TestLeaseRecovery
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.548 sec
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.012 sec
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.057 sec
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.057 sec
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.172 sec
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.481 sec
Running org.apache.hadoop.hdfs.TestDFSMkdirs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.514 sec
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.066 sec
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.193 sec
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.285 sec
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.072 sec
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.392 sec
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.428 sec
Running org.apache.hadoop.hdfs.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.933 sec
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.804 sec
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.186 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.469 sec
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.694 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.39 sec
Running org.apache.hadoop.hdfs.web.T

Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #358

2012-08-29 Thread Apache Jenkins Server
See 

Changes:

[bobby] MAPREDUCE-4600. TestTokenCache.java from MRV1 no longer compiles  
(daryn via bobby)

[suresh] HADOOP-8619. Merging change from trunk to branch-2

--
[...truncated 19062 lines...]
[...27 javadoc "Generating ..." lines omitted; the generated-file paths were stripped in the archive...]

Hadoop-Hdfs-0.23-Build - Build # 358 - Failure

2012-08-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/358/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 19255 lines...]
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-hdfs-project ---
[INFO] Wrote classpath file 
'/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/target/classes/mrapp-generated-classpath'.
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
hadoop-hdfs-project ---
[INFO] Installing 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs-project/0.23.3-SNAPSHOT/hadoop-hdfs-project-0.23.3-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-hdfs-project ---
[INFO] Skipped writing classpath file 
'/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/target/classes/mrapp-generated-classpath'.
  No changes found.
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [5:22.907s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [49.042s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.059s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 6:12.613s
[INFO] Finished at: Wed Aug 29 11:40:07 UTC 2012
[INFO] Final Memory: 53M/765M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Publishing Javadoc
Recording fingerprints
Error updating JIRA issues. Saving issues for next build.
com.atlassian.jira.rpc.exception.RemotePermissionException: This issue does not 
exist or you don't have permission to view it.
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
4 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestAppendDifferentChecksum.org.apache.hadoop.hdfs.TestAppendDifferentChecksum

Error Message:
Cannot lock storage 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1.
 The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1.
 The directory is already locked.
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:586)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:435)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:295)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:211)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:175)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:336)
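
The "already locked" error above comes from the storage-directory lock file. Below is a simplified sketch of the locking pattern involved: an exclusive tryLock on the in_use.lock file inside the storage directory. The real Storage.java code also records the JVM name in the lock file, and the directory path here is whatever the test passes in; this is not the exact Hadoop code, only the pattern that makes two test runs sharing the same name directory collide.

  import java.io.File;
  import java.io.IOException;
  import java.io.RandomAccessFile;
  import java.nio.channels.FileLock;
  import java.nio.channels.OverlappingFileLockException;

  public class StorageLockSketch {
    // Simplified version of the lock the NameNode takes on each storage dir.
    // Two test runs (or two clusters) pointed at the same name directory both
    // reach for the same in_use.lock file, and the second one fails.
    static FileLock tryLock(File storageDir) throws IOException {
      File lockFile = new File(storageDir, "in_use.lock");
      RandomAccessFile file = new RandomAccessFile(lockFile, "rws");
      FileLock lock;
      try {
        lock = file.getChannel().tryLock();
      } catch (OverlappingFileLockException e) {
        lock = null;  // already held by this JVM
      }
      if (lock == null) {
        file.close();
        throw new IOException("Cannot lock storage " + storageDir
            + ". The directory is already locked.");
      }
      return lock;
    }
  }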
   

[jira] [Created] (HDFS-3866) HttpFS build should download Tomcat via Maven instead of directly

2012-08-29 Thread Ryan Hennig (JIRA)
Ryan Hennig created HDFS-3866:
-

 Summary: HttpFS build should download Tomcat via Maven instead of 
directly
 Key: HDFS-3866
 URL: https://issues.apache.org/jira/browse/HDFS-3866
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: CDH4 build on CentOS 6.2
Reporter: Ryan Hennig
Priority: Minor


When trying to enable a build of CDH4 in Jenkins, I got a build error due to an 
attempt to download Tomcat from the internet directly instead of via Maven and 
thus our internal Maven repository.

The problem is due to this line in 
src/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/antrun/build-main.xml:
  <get src="http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.32/bin/apache-tomcat-6.0.32.tar.gz"/>

This build.xml is generated from 
src/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml:
<get src="http://archive.apache.org/dist/tomcat/tomcat-6/v${tomcat.version}/bin/apache-tomcat-${tomcat.version}.tar.gz"
  dest="downloads/tomcat.tar.gz" verbose="true" skipexisting="true"/>

Instead of directly downloading from a hardcoded location, the Tomcat 
dependency should be managed by Maven.  This would enable the use of a local 
repository for build machines without internet access.
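
One way to achieve that, sketched below, is to declare the tarball as a Maven artifact and let maven-dependency-plugin copy it into the downloads directory, so the fetch goes through whatever repositories and mirrors the build is configured with. The coordinates and packaging type here are assumptions for illustration; Tomcat 6 binary tarballs are not necessarily published anywhere under these coordinates.

  <!-- Hypothetical sketch: groupId/artifactId/type below are assumptions,
       not verified artifacts in any public repository. -->
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
      <execution>
        <id>fetch-tomcat</id>
        <phase>generate-resources</phase>
        <goals>
          <goal>copy</goal>
        </goals>
        <configuration>
          <artifactItems>
            <artifactItem>
              <groupId>org.apache.tomcat</groupId>
              <artifactId>apache-tomcat</artifactId>
              <version>${tomcat.version}</version>
              <type>tar.gz</type>
              <outputDirectory>${project.build.directory}/downloads</outputDirectory>
              <destFileName>tomcat.tar.gz</destFileName>
            </artifactItem>
          </artifactItems>
        </configuration>
      </execution>
    </executions>
  </plugin>

With something like that in place, the tarball would be resolved like any other artifact, so an internal repository manager can serve it to build machines without internet access.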


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-3867) QJM: Support rolling restart of JNs

2012-08-29 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-3867:
-

 Summary: QJM: Support rolling restart of JNs
 Key: HDFS-3867
 URL: https://issues.apache.org/jira/browse/HDFS-3867
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: QuorumJournalManager (HDFS-3077)
Reporter: Todd Lipcon
Assignee: Todd Lipcon


In order to perform upgrades or other maintenance, it is useful to be able to 
perform a rolling restart of the journal nodes while the NameNode is active.

Currently, this does not work, because the NN only picks up restarted JNs again at the beginning of the next log segment. So, if the NN does not roll after each node is restarted in turn, it will eventually fail to commit to a quorum and crash.
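
A small illustration of the quorum arithmetic behind that failure mode (plain arithmetic, not QJM code): if each restarted JN stays unreachable to the NN until the next log roll, every step of the rolling restart removes one writer.

  public class QuorumSketch {
    // Majority needed to commit an edit batch with n JournalNodes: floor(n/2) + 1.
    static int quorumSize(int journalNodes) {
      return journalNodes / 2 + 1;
    }

    public static void main(String[] args) {
      int jns = 3;
      int reachable = jns;
      for (int restarted = 1; restarted <= jns; restarted++) {
        reachable--;  // restarted JN is not picked up again until the next segment
        System.out.printf("after restarting JN %d: %d of %d reachable, quorum=%d -> %s%n",
            restarted, reachable, jns, quorumSize(jns),
            reachable >= quorumSize(jns) ? "still committing" : "cannot commit, NN aborts");
      }
    }
  }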

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-3868) Add SHA1 implementation and add NIST test vectors

2012-08-29 Thread Andy Isaacson (JIRA)
Andy Isaacson created HDFS-3868:
---

 Summary: Add SHA1 implementation and add NIST test vectors
 Key: HDFS-3868
 URL: https://issues.apache.org/jira/browse/HDFS-3868
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Andy Isaacson
Assignee: Andy Isaacson
Priority: Trivial


In HDFS-3859 Todd noted that we don't have a SHA1 implementation.  While 
implementing SHA1 I noted that TestMD5Hash is missing the NIST test vectors (so 
we don't actually test that the function we're using is MD5, at all!).

This Jira implements SHA1 in the same fashion as MD5, and adds the NIST test 
vectors to both MD5 and SHA1.
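
For reference, the standard NIST "abc" vector can be checked against the JDK's MessageDigest; this is only a sketch to show the expected digests, not the implementation this issue proposes.

  import java.nio.charset.StandardCharsets;
  import java.security.MessageDigest;

  public class DigestVectorsSketch {
    static String hex(byte[] bytes) {
      StringBuilder sb = new StringBuilder();
      for (byte b : bytes) sb.append(String.format("%02x", b));
      return sb.toString();
    }

    public static void main(String[] args) throws Exception {
      byte[] abc = "abc".getBytes(StandardCharsets.US_ASCII);
      // Well-known NIST test vectors for the input "abc":
      //   MD5   : 900150983cd24fb0d6963f7d28e17f72
      //   SHA-1 : a9993e364706816aba3e25717850c26c9cd0d89d
      System.out.println("MD5(abc)   = " + hex(MessageDigest.getInstance("MD5").digest(abc)));
      System.out.println("SHA-1(abc) = " + hex(MessageDigest.getInstance("SHA-1").digest(abc)));
    }
  }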

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-3869) QJM: expose non-file journal manager details in web UI

2012-08-29 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-3869:
-

 Summary: QJM: expose non-file journal manager details in web UI
 Key: HDFS-3869
 URL: https://issues.apache.org/jira/browse/HDFS-3869
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: QuorumJournalManager (HDFS-3077)
Reporter: Todd Lipcon
Assignee: Todd Lipcon


Currently, the NN web UI only shows NN storage directories on local disk. It should also include details about any non-file JournalManagers in use.

This JIRA targets the QJM branch, but will be useful for BKJM as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-3870) QJM: add metrics to JournalNode

2012-08-29 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-3870:
-

 Summary: QJM: add metrics to JournalNode
 Key: HDFS-3870
 URL: https://issues.apache.org/jira/browse/HDFS-3870
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: QuorumJournalManager (HDFS-3077)
Reporter: Todd Lipcon
Assignee: Todd Lipcon


The JournalNode should expose some basic metrics through the usual interface. In particular:
- the writer epoch and the accepted epoch,
- the last written transaction ID and the last committed txid (which may be newer if the node is still catching up),
- latency information for how long syncs are taking.

Please feel free to suggest others that come to mind.
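
A hedged sketch of what such a source could look like with Hadoop's metrics2 annotations; the class and field names below are assumptions for illustration, not the eventual JournalNode implementation.

  import org.apache.hadoop.metrics2.annotation.Metric;
  import org.apache.hadoop.metrics2.annotation.Metrics;
  import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
  import org.apache.hadoop.metrics2.lib.MutableGaugeLong;
  import org.apache.hadoop.metrics2.lib.MutableRate;

  // Illustrative names only; the real metrics source may be shaped differently.
  @Metrics(about = "JournalNode metrics (sketch)", context = "dfs")
  public class JournalNodeMetricsSketch {
    @Metric("Current writer epoch") MutableGaugeLong currentWriterEpoch;
    @Metric("Last promised (accepted) epoch") MutableGaugeLong lastPromisedEpoch;
    @Metric("Highest txid written locally") MutableGaugeLong lastWrittenTxId;
    @Metric("Highest txid known to be committed") MutableGaugeLong lastCommittedTxId;
    @Metric("Time spent syncing edits to disk") MutableRate syncs;

    static JournalNodeMetricsSketch create() {
      // Register as a metrics source so the usual sinks (JMX, file, ganglia) see it.
      return DefaultMetricsSystem.instance().register(
          "JournalNodeMetricsSketch", "JournalNode metrics sketch",
          new JournalNodeMetricsSketch());
    }
  }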

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira