Hadoop-Hdfs-trunk - Build # 919 - Still Unstable

2012-01-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/919/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 12173 lines...]
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-bkjournal ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is true
[INFO] ** FindBugsMojo executeFindbugs ***
[INFO] Temp File is /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/findbugsTemp.xml
[INFO] Fork Value is true
[INFO] xmlOutput is false
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS Project 0.24.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.7:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [5:08.349s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [29.910s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [10.838s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.039s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 5:49.575s
[INFO] Finished at: Sun Jan 08 11:40:01 UTC 2012
[INFO] Final Memory: 87M/736M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover -DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks.testMaxCorruptFiles

Error Message:
Cannot remove file.

Stack Trace:
java.lang.AssertionError: Cannot remove file.
at org.junit.Assert.fail(Assert.java:91)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks.__CLR3_0_2mzyzlv1h6o(TestListCorruptFileBlocks.java:481)
at org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks.testMaxCorruptFiles(TestListCorruptFileBlocks.java:442)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at org.junit.runners.B

Jenkins build is still unstable: Hadoop-Hdfs-trunk #919

2012-01-08 Thread Apache Jenkins Server
See 




Hadoop-Hdfs-0.23-Build - Build # 132 - Still Unstable

2012-01-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/132/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 14060 lines...]

main:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.7:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ hadoop-hdfs-project ---
[INFO] Installing /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/pom.xml to /home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs-project/0.23.1-SNAPSHOT/hadoop-hdfs-project-0.23.1-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.7:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [5:38.803s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [43.184s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.059s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 6:22.490s
[INFO] Finished at: Sun Jan 08 11:40:55 UTC 2012
[INFO] Final Memory: 76M/756M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover -DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Publishing Clover coverage report...
Publishing Clover HTML report...
Publishing Clover XML report...
Publishing Clover coverage results...
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestMulitipleNNDataBlockScanner.testBlockScannerAfterRestart

Error Message:
Block Pool: BP-1727151826-67.195.138.31-1326023068054 is not running

Stack Trace:
java.io.IOException: Block Pool: BP-1727151826-67.195.138.31-1326023068054 is not running
at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getBlocksScannedInLastRun(DataBlockScanner.java:282)
at org.apache.hadoop.hdfs.server.datanode.TestMulitipleNNDataBlockScanner.__CLR3_0_2yc9exv189x(TestMulitipleNNDataBlockScanner.java:152)
at org.apache.hadoop.hdfs.server.datanode.TestMulitipleNNDataBlockScanner.testBlockScannerAfterRestart(TestMulitipleNNDataBlockScanner.java:142)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at org.junit.runners.B

Jenkins build is still unstable: Hadoop-Hdfs-0.23-Build #132

2012-01-08 Thread Apache Jenkins Server
See 




[jira] [Created] (HDFS-2769) HA: When HA is enabled with a shared edits dir, that dir should be marked required

2012-01-08 Thread Aaron T. Myers (Created) (JIRA)
HA: When HA is enabled with a shared edits dir, that dir should be marked required
--

 Key: HDFS-2769
 URL: https://issues.apache.org/jira/browse/HDFS-2769
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: HA branch (HDFS-1623)
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


HDFS-2430 introduced the concept of "required" edits/name dirs, which are not 
treated as redundant when considering NN storage failure. When a shared edits 
dir is used, this should automatically be marked as required.
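
The interaction of the two settings can be sketched as an hdfs-site.xml fragment. This is illustrative only: the property names are the ones I believe HDFS-2430 and the HA branch use (`dfs.namenode.edits.dir.required`, `dfs.namenode.shared.edits.dir`), but they may differ by version, and the NFS path is an invented example.

```xml
<!-- Illustrative hdfs-site.xml fragment; property names and path are examples -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>file:///mnt/filer/shared-edits</value>
</property>
<!-- Today an operator would have to repeat the shared dir here by hand;
     this issue proposes marking it required automatically. -->
<property>
  <name>dfs.namenode.edits.dir.required</name>
  <value>file:///mnt/filer/shared-edits</value>
</property>
```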

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-2730) HA: Refactor shared HA-related test code into HATestUtils class

2012-01-08 Thread Todd Lipcon (Resolved) (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon resolved HDFS-2730.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

Committed to branch, thx for the review.

> HA: Refactor shared HA-related test code into HATestUtils class
> ---
>
> Key: HDFS-2730
> URL: https://issues.apache.org/jira/browse/HDFS-2730
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, test
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: HA branch (HDFS-1623)
>
> Attachments: hdfs-2730.txt
>
>
> A fair number of the HA tests are sharing code like 
> {{waitForStandbyToCatchUp}}, etc. We should refactor this code into an 
> HATestUtils class with static methods.
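
The refactoring pattern proposed above can be sketched generically. The following Python snippet is a hypothetical stand-in, not the real HATestUtils API: it shows a polling helper of the `waitForStandbyToCatchUp` kind pulled into a utility class of static methods so that individual tests stop re-implementing the loop.

```python
import time


class HATestUtils:
    """Hypothetical sketch of the proposed pattern: shared test helpers
    collected as static methods on one utility class."""

    @staticmethod
    def wait_for(condition, timeout_s=10.0, interval_s=0.1):
        # Poll `condition` until it returns True or the timeout elapses.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if condition():
                return True
            time.sleep(interval_s)
        return False
```

A test would then call `HATestUtils.wait_for(lambda: standby_caught_up())` instead of carrying its own copy of the polling loop.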

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2770) Block reports may mark corrupt blocks pending deletion as non-corrupt

2012-01-08 Thread Todd Lipcon (Created) (JIRA)
Block reports may mark corrupt blocks pending deletion as non-corrupt
-

 Key: HDFS-2770
 URL: https://issues.apache.org/jira/browse/HDFS-2770
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Priority: Critical


It seems like HDFS-900 may have regressed in trunk since it was committed 
without a regression test. In HDFS-2742 I saw the following sequence of events:
- A block at replication 2 had one of its replicas marked as corrupt on the NN
- NN scheduled deletion of that replica in {{invalidateWork}}, and removed it 
from the block map
- The DN hosting that block sent a block report, which caused the replica to 
get re-added to the block map as if it were good
- The deletion request was passed to the DN and it deleted the block
- Now we're in a bad state, where the NN temporarily thinks that it has two 
good replicas, but in fact one of them has been deleted. If we lower 
replication of this block at this time, the one good remaining replica may be 
deleted.
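
The sequence above can be replayed as a toy bookkeeping simulation. This is hypothetical Python with invented names; it models only the state transitions described in the report, not real NameNode/DataNode code.

```python
def simulate_block_report_race():
    """Replay the reported race: a corrupt replica is scheduled for
    deletion, a block report re-adds it, then the deletion lands."""
    # A block with replication 2; the NN has marked dn1's replica corrupt.
    nn_replicas = {"dn1": "corrupt", "dn2": "good"}

    # NN schedules invalidation (invalidateWork) and drops the replica
    # from its block map.
    pending_invalidation = ["dn1"]
    del nn_replicas["dn1"]

    # dn1 sends a block report before processing the queued deletion;
    # the buggy path re-adds the replica as if it were good.
    nn_replicas["dn1"] = "good"

    # dn1 now processes the deletion and removes its copy on disk.
    on_disk = {"dn2"}

    # Bad state: the NN counts two good replicas while only one exists,
    # so lowering replication could delete the last real copy.
    nn_good = sorted(d for d, s in nn_replicas.items() if s == "good")
    return nn_good, sorted(on_disk)
```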

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2771) Move Federation and WebHDFS documentation into HDFS project

2012-01-08 Thread Todd Lipcon (Created) (JIRA)
Move Federation and WebHDFS documentation into HDFS project
---

 Key: HDFS-2771
 URL: https://issues.apache.org/jira/browse/HDFS-2771
 Project: Hadoop HDFS
  Issue Type: Task
  Components: documentation
Affects Versions: 0.23.0
Reporter: Todd Lipcon


For some strange reason, the WebHDFS and Federation documentation is currently 
in the hadoop-yarn site. This is counter-intuitive. We should move these 
documents to an hdfs site, or if we think that all documentation should go on 
one site, it should go into the hadoop-common project somewhere.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira