Hadoop-Hdfs-trunk - Build # 1466 - Still Failing

2013-07-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1466/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 15228 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [1:38:27.292s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [2:21.618s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. FAILURE [58.321s]
[INFO] Apache Hadoop HDFS-NFS ............................ FAILURE [25.697s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.033s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:42:13.807s
[INFO] Finished at: Sat Jul 20 13:16:33 UTC 2013
[INFO] Final Memory: 57M/883M
[INFO] 
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs-bkjournal: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.6:checkstyle (default-cli) on project hadoop-hdfs-nfs: An error has occurred in Checkstyle report generation. Failed during checkstyle execution: Unable to find configuration file at location file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml: Could not find resource 'file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml'. -> [Help 2]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs-bkjournal
Build step 'Execute shell' marked build as failure
Archiving artifacts
Error updating JIRA issues. Saving issues for next build.
com.atlassian.jira.rpc.exception.RemoteAuthenticationException: Attempt to log 
in user 'hudson' failed. The maximum number of failed login attempts has been 
reached. Please log into the application through the web interface to reset the 
number of failed login attempts.
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1466

2013-07-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1466/

Changes:

[szetszwo] HADOOP-9751. Add clientId and retryCount to RpcResponseHeaderProto.

[hitesh] YARN-919. Document setting default heap sizes in yarn-env.sh. Contributed by Mayank Bansal.

[tucu] HADOOP-9643. org.apache.hadoop.security.SecurityUtil calls toUpperCase(Locale.getDefault()) as well as toLowerCase(Locale.getDefault()) on hadoop.security.authentication value. (markrmil...@gmail.com via tucu)

[daryn] HADOOP-9748. Reduce blocking on UGI.ensureInitialized (daryn)

--
[...truncated 15035 lines...]
java.lang.AssertionError: SBN should have still been checkpointing.
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testStandbyExceptionThrownDuringCheckpoint(TestStandbyCheckpoints.java:279)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)

Running org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.694 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.101 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.463 sec

Results :

Failed tests:   testStandbyExceptionThrownDuringCheckpoint(org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints): SBN should have still been checkpointing.

Tests run: 32, Failures: 1, Errors: 0, Skipped: 0

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS-NFS 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-nfs ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-nfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hadoop-hdfs-nfs ---
[INFO] Compiling 12 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ hadoop-hdfs-nfs ---
[INFO] Compiling 7 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.12.3:test (default-test) @ hadoop-hdfs-nfs ---
[INFO] Surefire report directory: 


-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hdfs.nfs.nfs3.TestOffsetRange
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.058 sec
Running org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.058 sec
Running org.apache.hadoop.hdfs.nfs.nfs3.TestDFSClientCache
Tests run: 1, Failures: 0

[jira] [Created] (HDFS-5014) BPOfferService#processCommandFromActor() synchronization on namenode RPC call delays IBR to Active NN, if Standby NN is unstable

2013-07-20 Thread Vinay (JIRA)
Vinay created HDFS-5014:
---

 Summary: BPOfferService#processCommandFromActor() synchronization on namenode RPC call delays IBR to Active NN, if Standby NN is unstable
 Key: HDFS-5014
 URL: https://issues.apache.org/jira/browse/HDFS-5014
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, ha
Affects Versions: 2.0.4-alpha
Reporter: Vinay
Assignee: Vinay


In one of our clusters, the following happened and caused HDFS writes to fail.

1. The Standby NN was unstable and continuously restarting due to some errors, but the Active NN was stable.
2. An MR job was writing files.
3. At some point the SNN went down again while a datanode was processing the REGISTER command for the SNN.
4. Datanodes started retrying to connect to the SNN to register, at the following code in BPServiceActor#retrieveNamespaceInfo(), which is called under synchronization:
{code}
try {
  nsInfo = bpNamenode.versionRequest();
  LOG.debug(this + " received versionRequest response: " + nsInfo);
  break;
{code}
Unfortunately this happened in all datanodes at the same point.
5. The standby stayed down for the next 7-8 minutes; during that time no blocks were reported to the Active NN, and the writes failed.

So the culprit is that {{BPOfferService#processCommandFromActor()}} is completely synchronized, which is not required.
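
To illustrate the hazard, here is a minimal sketch (hypothetical names throughout, not the actual BPOfferService source): a blocking call performed while holding the shared monitor stalls every other caller of that monitor, including the cheap block-report path for the Active NN.
{code}
// Hypothetical sketch of the locking hazard; not the Hadoop source.
class OfferServiceSketch {
  private final Object lock = new Object();

  // Anti-pattern: a slow call (sleep stands in for versionRequest()
  // retries against a dead standby) runs while holding the lock.
  void processStandbyCommand() throws InterruptedException {
    synchronized (lock) {
      Thread.sleep(5000); // blocking call under the shared lock
    }
  }

  // Work for the *active* NN blocks here even though it is cheap.
  void reportBlocksToActive() {
    synchronized (lock) {
      // incremental block report would be sent here
    }
  }

  // Safer shape: perform the blocking call with no lock held, then
  // take the lock only for the short state update.
  void processStandbyCommandNarrowed() throws InterruptedException {
    Thread.sleep(5000); // blocking call outside the lock
    synchronized (lock) {
      // short, non-blocking state update
    }
  }
}
{code}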

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5015) Error while running custom writable code - DFSClient_NONMAPREDUCE_549327626_1 does not have any open file

2013-07-20 Thread Allan Gonsalves (JIRA)
Allan Gonsalves created HDFS-5015:
-

 Summary: Error while running custom writable code - DFSClient_NONMAPREDUCE_549327626_1 does not have any open file
 Key: HDFS-5015
 URL: https://issues.apache.org/jira/browse/HDFS-5015
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
 Environment: Cloudera 4.3
Reporter: Allan Gonsalves


Hi,
We are facing the below error while running custom Writable code; our driver code is shown below. logwritable is the name of the custom Writable class.

--Error message at code execution is as follows:
DFSClient_NONMAPREDUCE_549327626_1 does not have any open file

--Driver code:
job.setMapOutputKeyClass(logwritable.class);
job.setMapOutputValueClass(logwritable.class);
job.setNumReduceTasks(0);

--The logwritable class implements the Writable interface.
--Definition of the mapper is as follows (input key/value types assumed to be the TextInputFormat defaults):

public class MAPPER extends Mapper<LongWritable, Text, logwritable, logwritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        context.write(new logwritable(), new logwritable());
    }
}
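
For reference, a minimal sketch of the shape a custom Writable takes (field names are illustrative placeholders, not our actual class): write() and readFields() must handle the same fields in the same order, and Hadoop needs a public no-arg constructor.
{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Illustrative placeholder, not the actual logwritable class.
public class LogWritableSketch implements Writable {
    private String line = "";

    public LogWritableSketch() {} // Hadoop instantiates Writables reflectively

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(line); // serialize fields in a fixed order
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        line = in.readUTF(); // read back the same fields in the same order
    }
}
{code}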

Any help to resolve the above issue would be greatly appreciated.
Thanks,
Allan

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira