Hadoop-Hdfs-0.23-Build - Build # 392 - Still Unstable

2012-10-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/392/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 19629 lines...]
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ hadoop-hdfs-project ---
[INFO] Installing /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/pom.xml to /home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs-project/0.23.5-SNAPSHOT/hadoop-hdfs-project-0.23.5-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ hadoop-hdfs-project ---
[INFO] Skipped writing classpath file '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/target/classes/mrapp-generated-classpath'.  No changes found.
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [5:34.970s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [47.421s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.059s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 6:23.059s
[INFO] Finished at: Tue Oct 02 11:40:51 UTC 2012
[INFO] Final Memory: 54M/741M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover -DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Updating HADOOP-8755
Updating YARN-138
Updating HDFS-3919
Updating HADOOP-8851
Updating HADOOP-8775
Updating YARN-116
Updating HADOOP-8789
Updating HADOOP-8819
Updating HADOOP-8386
Updating MAPREDUCE-4674
Updating YARN-28
Updating HADOOP-8310
Updating HADOOP-8791
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###################################################################################
############################## FAILED TESTS (if any) ##############################
###################################################################################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks.testMaxCorruptFiles

Error Message:
IPC server unable to read call parameters: readObject can't find class java.lang.String

Stack Trace:
java.lang.RuntimeException: IPC server unable to read call parameters: readObject can't find class java.lang.String
at org.apache.hadoop.ipc.Client.call(Client.java:1088)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:195)
at $Proxy12.complete(Unknown Source)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:102)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:67)
at $Proxy12.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:1671)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:1658)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:66)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:99)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:200)
at org.apache.hadoop.hdfs.DFSTestUtil.createFiles(DFSTestUtil.java:170)
at org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks.__CLR3_0_2mzyzlv1e3r(TestListCorruptFileBlocks.java:455)
at 

[jira] [Created] (HDFS-3996) Add debug log removed in HDFS-3873 back

2012-10-02 Thread Eli Collins (JIRA)
Eli Collins created HDFS-3996:
-------------------------------------

 Summary: Add debug log removed in HDFS-3873 back
 Key: HDFS-3996
 URL: https://issues.apache.org/jira/browse/HDFS-3996
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor


Per HDFS-3873 let's add the debug log back.
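
For reference, the generic guarded-debug-log idiom such a restoration would use; the actual logger, message, and location are in the HDFS-3873 diff and are not reproduced here, so everything below is illustrative:

{noformat}
// Illustrative only: standard commons-logging debug guard; the real
// message and variables come from the code removed in HDFS-3873.
if (LOG.isDebugEnabled()) {
  LOG.debug("block state change: " + block);  // hypothetical message
}
{noformat}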

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-3998) Speed up fsck

2012-10-02 Thread Ming Ma (JIRA)
Ming Ma created HDFS-3998:
-------------------------------------

 Summary: Speed up fsck
 Key: HDFS-3998
 URL: https://issues.apache.org/jira/browse/HDFS-3998
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Ming Ma


We have some big clusters. Sometimes we want to quickly find the list of missing blocks, or of blocks with only one replica. Currently fsck takes a path as input and recursively checks it for inconsistencies, which can take a long time just to find the missing blocks and the files they belong to. It would be useful to speed this up: for example, fsck could go directly to the missing blocks tracked in the NN and look up the owning files from there instead.
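
A rough sketch of that direct path follows; the accessor names (getMissingBlocksIterator, getStoredBlock, getBlockCollection, report) are assumptions for illustration, not necessarily the real BlockManager API:

{noformat}
// Sketch only: enumerate the NN's missing-block queue directly instead
// of recursing over a namespace path. All method names are illustrative.
Iterator<Block> missing = blockManager.getMissingBlocksIterator();
while (missing.hasNext()) {
  Block b = missing.next();
  BlockInfo stored = blockManager.getStoredBlock(b); // block -> stored metadata
  INodeFile file = stored.getBlockCollection();      // metadata -> owning file
  report(file.getFullPathName(), b);                 // emit path + block id
}
{noformat}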



[jira] [Created] (HDFS-3999) HttpFS OPEN operation expects len parameter, it should be length

2012-10-02 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HDFS-3999:


 Summary: HttpFS OPEN operation expects len parameter, it should be length
 Key: HDFS-3999
 URL: https://issues.apache.org/jira/browse/HDFS-3999
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha


The WebHDFS API defines *length* as the parameter that bounds how many bytes an OPEN operation returns; HttpFS is using *len* instead.
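
For example, a WebHDFS-compliant partial read against HttpFS would send *length*; host and path below are placeholders (14000 is the default HttpFS port):

{noformat}
// Spec-compliant OPEN: 'length' bounds the bytes returned.
// HttpFS currently only honors this bound when the parameter is 'len'.
URL url = new URL("http://httpfs-host:14000/webhdfs/v1/user/foo/bar.txt"
    + "?op=OPEN&offset=0&length=1024");
InputStream in = url.openConnection().getInputStream();
{noformat}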



[jira] [Created] (HDFS-4000) TestParallelLocalRead fails with "input ByteBuffers must be direct buffers"

2012-10-02 Thread Eli Collins (JIRA)
Eli Collins created HDFS-4000:
-------------------------------------

 Summary: TestParallelLocalRead fails with "input ByteBuffers must be direct buffers"
 Key: HDFS-4000
 URL: https://issues.apache.org/jira/browse/HDFS-4000
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe


I think this may be related to HDFS-3753; the test passes when I revert it. Here's the failure. It looks like it needs the same fix as TestShortCircuitLocalRead; not sure why that didn't show up in the Jenkins run.

{noformat}
java.lang.AssertionError: Check log for errors
at org.junit.Assert.fail(Assert.java:91)
at org.apache.hadoop.hdfs.TestParallelReadUtil.runTestWorkload(TestParallelReadUtil.java:373)
at org.apache.hadoop.hdfs.TestParallelLocalRead.testParallelReadByteBuffer(TestParallelLocalRead.java:61)
{noformat}

{noformat}
2012-10-02 15:39:49,481 ERROR hdfs.TestParallelReadUtil (TestParallelReadUtil.java:run(227)) - ReadWorker-1-/TestParallelRead.dat.0: Error while testing read at 199510 length 14773
java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
at org.apache.hadoop.hdfs.BlockReaderLocal.doByteBufferRead(BlockReaderLocal.java:501)
at org.apache.hadoop.hdfs.BlockReaderLocal.read(BlockReaderLocal.java:409)
at org.apache.hadoop.hdfs.DFSInputStream$ByteBufferStrategy.doRead(DFSInputStream.java:561)
at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:594)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:648)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:696)
at org.apache.hadoop.hdfs.TestParallelReadUtil$DirectReadWorkerHelper.read(TestParallelReadUtil.java:91)
at org.apache.hadoop.hdfs.TestParallelReadUtil$DirectReadWorkerHelper.pRead(TestParallelReadUtil.java:104)
at org.apache.hadoop.hdfs.TestParallelReadUtil$ReadWorker.pRead(TestParallelReadUtil.java:275)
at org.apache.hadoop.hdfs.TestParallelReadUtil$ReadWorker.run(TestParallelReadUtil.java:223)
{noformat}
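
For context, a minimal sketch of the buffer distinction behind the error, assuming the fix mirrors TestShortCircuitLocalRead (i.e. the byte-buffer read path must hand NativeCrc32 direct buffers):

{noformat}
import java.nio.ByteBuffer;

// NativeCrc32.nativeVerifyChunkedSums works on off-heap memory, so a
// heap buffer trips "input ByteBuffers must be direct buffers".
ByteBuffer heap   = ByteBuffer.allocate(14773);        // byte[]-backed: rejected
ByteBuffer direct = ByteBuffer.allocateDirect(14773);  // off-heap: accepted
{noformat}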



[jira] [Created] (HDFS-4001) TestSafeMode#testInitializeReplQueuesEarly may time out

2012-10-02 Thread Eli Collins (JIRA)
Eli Collins created HDFS-4001:
-------------------------------------

 Summary: TestSafeMode#testInitializeReplQueuesEarly may time out
 Key: HDFS-4001
 URL: https://issues.apache.org/jira/browse/HDFS-4001
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins


Saw this failure on a recent branch-2 Jenkins run; it has also been seen on trunk.

{noformat}
java.util.concurrent.TimeoutException: Timed out waiting for condition
at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:107)
at org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:191)
{noformat}
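
The test polls through GenericTestUtils.waitFor; a sketch of that pattern is below (the condition body and the bounds are illustrative, not the actual test's values):

{noformat}
// Illustrative waitFor usage (Guava Supplier); raising the final bound
// is one mitigation if the condition is slow rather than stuck.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return nn.isInSafeMode() && replQueuesInitialized(nn); // hypothetical check
  }
}, 100 /* poll every 100 ms */, 60000 /* time out after 60 s */);
{noformat}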

