Hadoop-Hdfs-trunk - Build # 914 - Unstable

2012-01-03 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/914/

###########################################################################
###################### LAST 60 LINES OF THE CONSOLE #######################
###########################################################################
[...truncated 12062 lines...]
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-bkjournal ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is true
[INFO] ** FindBugsMojo executeFindbugs ***
[INFO] Temp File is /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/findbugsTemp.xml
[INFO] Fork Value is true
[INFO] xmlOutput is false
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS Project 0.24.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.7:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................. SUCCESS [4:59.968s]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [30.978s]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [10.733s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [0.033s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 5:42.135s
[INFO] Finished at: Tue Jan 03 11:40:04 UTC 2012
[INFO] Final Memory: 85M/736M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover -DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###########################################################################
########################## FAILED TESTS (if any) ##########################
###########################################################################
3 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode

Error Message:
Port in use: 0.0.0.0:50105

Stack Trace:
junit.framework.AssertionFailedError: Port in use: 0.0.0.0:50105
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.assertTrue(Assert.java:20)
at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:280)
at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.__CLR3_0_2odeun71e4j(TestBackupNode.java:113)
at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite
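
The assertion above is a bind failure: the backup node's HTTP server could not bind 0.0.0.0:50105, its fixed default port, presumably because another process or an earlier test on the build slave still held it. For background only, a generic Java sketch (not code from TestBackupNode) of the usual way tests sidestep such collisions: bind port 0 and let the OS assign a free ephemeral port.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Generic illustration only -- not from TestBackupNode. Binding port 0 lets
// the OS pick a free ephemeral port instead of hard-coding one such as 50105.
public final class FreePortSketch {

  static int pickFreePort() throws IOException {
    ServerSocket s = new ServerSocket();
    try {
      s.setReuseAddress(true);
      s.bind(new InetSocketAddress(0)); // port 0: the OS chooses a free port
      return s.getLocalPort();          // the port that was actually assigned
    } finally {
      s.close();
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println("free port: " + pickFreePort());
  }
}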

Jenkins build became unstable: Hadoop-Hdfs-trunk #914

2012-01-03 Thread Apache Jenkins Server
See 




[jira] [Created] (HDFS-2743) Streamline usage of bookkeeper journal manager

2012-01-03 Thread Ivan Kelly (Created) (JIRA)
Streamline usage of bookkeeper journal manager
--

 Key: HDFS-2743
 URL: https://issues.apache.org/jira/browse/HDFS-2743
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ivan Kelly
Assignee: Ivan Kelly
 Fix For: 0.24.0


The current method of installing the bkjournal manager involves generating a 
tarball and extracting it with special flags over the hdfs distribution. This 
is cumbersome and prone to being broken by other changes (see 
https://svn.apache.org/repos/asf/hadoop/common/trunk@1220940). I think a 
cleaner way to do this is to generate a single jar that can be placed in the 
lib dir of hdfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2744) Extend FSDataInputStream to allow fadvise

2012-01-03 Thread dhruba borthakur (Created) (JIRA)
Extend FSDataInputStream to allow fadvise
-

 Key: HDFS-2744
 URL: https://issues.apache.org/jira/browse/HDFS-2744
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Reporter: dhruba borthakur
Assignee: dhruba borthakur


Now that we have direct reads from local HDFS block files (HDFS-2246), it might 
make sense to make FSDataInputStream support fadvise calls. I have an 
application (HBase) that would like to tell the OS that it should not buffer 
data in the OS buffer cache.
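
For illustration only, a rough Java sketch of the kind of hint interface this could take. The interface and method names below (CanFadvise, dontNeed, willNeed) are hypothetical; nothing like them exists on FSDataInputStream today, which is exactly what this issue asks to change.

{noformat}
import java.io.IOException;

// Hypothetical sketch, not an existing Hadoop API: roughly the shape of the
// fadvise-style hints this issue proposes for FSDataInputStream.
public interface CanFadvise {

  /** Hint that the given byte range will not be read again soon, so the OS
   *  may drop it from the buffer cache (posix_fadvise POSIX_FADV_DONTNEED). */
  void dontNeed(long offset, long length) throws IOException;

  /** Hint that the given byte range will be read soon, so the OS may start
   *  reading it ahead (posix_fadvise POSIX_FADV_WILLNEED). */
  void willNeed(long offset, long length) throws IOException;
}
{noformat}

A stream backed by a local block file could implement the interface and forward the hints to posix_fadvise, while a caller such as HBase would check the stream with instanceof before issuing the hint.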

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2745) unclear to users which command to use to access the filesystem

2012-01-03 Thread Thomas Graves (Created) (JIRA)
unclear to users which command to use to access the filesystem
--

 Key: HDFS-2745
 URL: https://issues.apache.org/jira/browse/HDFS-2745
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Thomas Graves
Priority: Critical


It's unclear to users which command to use to access the filesystem. We need some 
background, and then we can fix things accordingly. We have 3 choices:

hadoop dfs -> says it's deprecated and to use hdfs. If I run hdfs, its usage 
doesn't list any options like -ls, although there is an hdfs dfs command.

hdfs dfs -> not in the usage of hdfs. If we recommend it when running hadoop 
dfs, it should at least be in the usage.

hadoop fs -> seems like the one to use; it appears generic for any filesystem.

Any input on what the recommended way to do this is? Based on that we can 
fix up the other issues. 



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2746) Add a parameter to `hdfs dfsadmin -saveNamespace' to allow an arbitrary storage dir

2012-01-03 Thread Aaron T. Myers (Created) (JIRA)
Add a parameter to `hdfs dfsadmin -saveNamespace' to allow an arbitrary storage dir
---

 Key: HDFS-2746
 URL: https://issues.apache.org/jira/browse/HDFS-2746
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers


Presently `hdfs dfsadmin -saveNamespace' will attempt to save a snapshot of the 
file system metadata state into all configured storage dirs. It would be nice 
if we could optionally specify an arbitrary local dir to save the FS state 
into, instead of the configured storage dirs.
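
For illustration, a rough sketch of how the optional argument could be parsed and dispatched on the client side. The class and interface names below are made up for the example and are not the real DFSAdmin code; the option itself does not exist yet, which is what this issue proposes.

{noformat}
import java.io.File;
import java.io.IOException;

// Hypothetical sketch only -- not the real DFSAdmin implementation.
public class SaveNamespaceSketch {

  /** Parse "-saveNamespace [targetDir]" and dispatch accordingly. */
  static int run(String[] argv, NamespaceSaver saver) throws IOException {
    if (argv.length == 0 || !"-saveNamespace".equals(argv[0])) {
      System.err.println("Usage: dfsadmin -saveNamespace [<targetDir>]");
      return -1;
    }
    if (argv.length > 1) {
      File target = new File(argv[1]);
      if (!target.isDirectory() && !target.mkdirs()) {
        throw new IOException("Cannot create target dir " + target);
      }
      saver.saveTo(target);          // proposed: save one image into the given dir
    } else {
      saver.saveToConfiguredDirs();  // current behaviour: all configured dirs
    }
    return 0;
  }

  /** Hypothetical hook that would sit in front of the NameNode RPC. */
  interface NamespaceSaver {
    void saveTo(File dir) throws IOException;
    void saveToConfiguredDirs() throws IOException;
  }
}
{noformat}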

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2747) HA: entering SM after starting SBN can NPE

2012-01-03 Thread Eli Collins (Created) (JIRA)
HA: entering SM after starting SBN can NPE
--

 Key: HDFS-2747
 URL: https://issues.apache.org/jira/browse/HDFS-2747
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: HA branch (HDFS-1623)
Reporter: Eli Collins


Entering SM on the primary while it's already in SM (after the SBN is started) 
results in an NPE: 

{noformat}
hadoop-0.24.0-SNAPSHOT $ ./bin/hdfs dfsadmin -safemode get
Safe mode is ON
hadoop-0.24.0-SNAPSHOT $ ./bin/hdfs dfsadmin -safemode enter
safemode: java.lang.NullPointerException
{noformat}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira