Hadoop-Hdfs-22-branch - Build # 128 - Failure

2012-04-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/128/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 1036 lines...]
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/paranamer/paranamer-ant/2.3/paranamer-ant-2.3.pom
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/paranamer/paranamer-ant/2.3/paranamer-ant-2.3.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/paranamer/paranamer-ant/2.3/paranamer-ant-2.3-sources.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/paranamer/paranamer-ant/2.3/paranamer-ant-2.3-javadoc.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/paranamer/paranamer-generator/2.3/paranamer-generator-2.3.pom
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/paranamer/paranamer-generator/2.3/paranamer-generator-2.3.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/paranamer/paranamer-generator/2.3/paranamer-generator-2.3-sources.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/paranamer/paranamer-generator/2.3/paranamer-generator-2.3-javadoc.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/qdox/qdox/1.12/qdox-1.12.pom
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/qdox/qdox/1.12/qdox-1.12.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/qdox/qdox/1.12/qdox-1.12-sources.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/com/thoughtworks/qdox/qdox/1.12/qdox-1.12-javadoc.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/asm/asm/3.3/asm-3.3.pom
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/asm/asm/3.3/asm-3.3.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/asm/asm-parent/3.3/asm-parent-3.3.pom
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/asm/asm-parent/3.3/asm-parent-3.3.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/asm/asm/3.3/asm-3.3-sources.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/asm/asm/3.3/asm-3.3-javadoc.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/org/apache/ant/ant/1.7.1/ant-1.7.1.pom
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/org/apache/ant/ant/1.7.1/ant-1.7.1.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/org/apache/ant/ant-parent/1.7.1/ant-parent-1.7.1.pom
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/org/apache/ant/ant-parent/1.7.1/ant-parent-1.7.1.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/org/apache/ant/ant/1.7.1/ant-1.7.1-sources.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/org/apache/ant/ant/1.7.1/ant-1.7.1-javadoc.jar
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/org/apache/ant/ant-launcher/1.7.1/ant-launcher-1.7.1.pom
[ivy:resolve]   SERVER ERROR: Service Temporarily Unavailable 
url=https://repository.apache.org/content/repositories/snapshots/org/apache/ant/ant-launcher/1.7.1/ant-launcher-1.7.1.jar
[ivy:resolve]  

Jenkins build is back to stable : Hadoop-Hdfs-0.23-Build #216

2012-04-02 Thread Apache Jenkins Server
See 



Hadoop-Hdfs-trunk - Build # 1003 - Still Failing

2012-04-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1003/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 12642 lines...]
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] 
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, 
no dependency information available
[WARNING] Failed to retrieve plugin descriptor for 
org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin 
org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be 
resolved: Failed to read artifact descriptor for 
org.eclipse.m2e:lifecycle-mapping:jar:1.0.0
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [5:21.949s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [39.472s]
[INFO] Apache Hadoop HDFS BookKeeper Journal . SUCCESS [11.082s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.033s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 6:13.293s
[INFO] Finished at: Mon Apr 02 12:05:20 UTC 2012
[INFO] Final Memory: 85M/737M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-2995
Updating HDFS-3167
Updating HDFS-3174
Updating HADOOP-8236
Updating HADOOP-8199
Updating HADOOP-8238
Updating HDFS-3144
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #1003

2012-04-02 Thread Apache Jenkins Server
See 

Changes:

[todd] HADOOP-8236. haadmin should have configurable timeouts for failover 
commands. Contributed by Todd Lipcon.

[eli] HDFS-3174. Fix assert in TestPendingDataNodeMessages. Contributed by Eli 
Collins

[eli] Fixup CHANGES.txt for HDFS-3144.

[eli] HDFS-3144. Refactor DatanodeID#getName by use. Contributed by Eli Collins

[eli] HADOOP-8238. NetUtils#getHostNameOfIP blows up if given ip:port string 
w/o port. Contributed by Eli Collins

[umamahesh] HADOOP-8199. Fix issues in start-all.sh and stop-all.sh Contributed 
by Devaraj K.

[eli] HDFS-2995. start-dfs.sh should only start the 2NN for namenodes with 
dfs.namenode.secondary.http-address configured. Contributed by Eli Collins

[atm] HDFS-3167. CLI-based driver for MiniDFSCluster. Contributed by Henry 
Robinson.

--
[...truncated 12449 lines...]

[jira] [Created] (HDFS-3176) JsonUtil should not parse the MD5MD5CRC32FileChecksum bytes on its own.

2012-04-02 Thread Kihwal Lee (Created) (JIRA)
JsonUtil should not parse the MD5MD5CRC32FileChecksum bytes on its own.
---

 Key: HDFS-3176
 URL: https://issues.apache.org/jira/browse/HDFS-3176
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 1.0.1, 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 1.1.0, 0.23.3, 2.0.0, 3.0.0


Currently, the JsonUtil used by webhdfs parses the MD5MD5CRC32FileChecksum binary 
bytes on its own and constructs an MD5MD5CRC32FileChecksum. It should instead call 
MD5MD5CRC32FileChecksum.readFields().
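
As an illustration only, a minimal sketch of the suggested change (the class and 
helper names are hypothetical; only MD5MD5CRC32FileChecksum.readFields() comes from 
the report), letting the checksum class deserialize its own bytes:
{code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

import org.apache.hadoop.fs.MD5MD5CRC32FileChecksum;

// Hypothetical helper: decode the serialized checksum by delegating to
// readFields() instead of hand-parsing the byte layout in JsonUtil.
public class ChecksumDecodeSketch {
  static MD5MD5CRC32FileChecksum decode(byte[] bytes) throws IOException {
    MD5MD5CRC32FileChecksum checksum = new MD5MD5CRC32FileChecksum();
    checksum.readFields(new DataInputStream(new ByteArrayInputStream(bytes)));
    return checksum;
  }
}
{code}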

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3177) Allow DFSClient to find out and use the CRC type being used for a file.

2012-04-02 Thread Kihwal Lee (Created) (JIRA)
Allow DFSClient to find out and use the CRC type being used for a file.
---

 Key: HDFS-3177
 URL: https://issues.apache.org/jira/browse/HDFS-3177
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, hdfs client
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.0.0, 3.0.0


To support HADOOP-8060, DFSClient should be able to find out the checksum type 
being used for files in HDFS.
In my prototype, DataTransferProtocol was extended to include the checksum type 
in the blockChecksum() response, and DFSClient uses it in getFileChecksum() to 
determine the checksum type. Also, append() can be configured to use the existing 
checksum type instead of the configured one.
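
For context, a hedged sketch of the existing public-API starting point (not the 
prototype's DataTransferProtocol change; the class name and path argument are 
illustrative): a client can already fetch the file checksum and inspect its 
algorithm name, which is where the CRC type would surface.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumTypeProbe {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileChecksum checksum = fs.getFileChecksum(new Path(args[0]));
    // The algorithm name encodes how the per-block CRCs were computed.
    System.out.println(checksum == null ? "no checksum available"
                                        : checksum.getAlgorithmName());
  }
}
{code}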

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3178) Add states for journal synchronization in journal daemon

2012-04-02 Thread Tsz Wo (Nicholas), SZE (Created) (JIRA)
Add states for journal synchronization in journal daemon


 Key: HDFS-3178
 URL: https://issues.apache.org/jira/browse/HDFS-3178
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-3150) Add option for clients to contact DNs via hostname in branch-1

2012-04-02 Thread Eli Collins (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved HDFS-3150.
---

  Resolution: Fixed
   Fix Version/s: 1.1.0
Target Version/s:   (was: 1.1.0)
Hadoop Flags: Reviewed

I've committed this.

> Add option for clients to contact DNs via hostname in branch-1
> --
>
> Key: HDFS-3150
> URL: https://issues.apache.org/jira/browse/HDFS-3150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node, hdfs client
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hdfs-3150-b1.txt, hdfs-3150-b1.txt
>
>
> Per the document attached to HADOOP-8198, this is just for branch-1, and 
> unbreaks DN multihoming. The datanode can be configured to listen on a bond, 
> or on all interfaces, by specifying the wildcard in the dfs.datanode.*.address 
> configuration options; however, per HADOOP-6867, only the source address of the 
> registration is exposed to clients. HADOOP-985 made clients access datanodes 
> by IP primarily to avoid the latency of a DNS lookup, but this had the side 
> effect of breaking DN multihoming. In order to fix it, let's add back the 
> option for Datanodes to be accessed by hostname. This can be done by:
> # Modifying the primary field of the Datanode descriptor to be the hostname, 
> or 
> # Modifying Client/Datanode <-> Datanode access to use the hostname field 
> instead of the IP
> I'd like to go with approach #2 as it does not require making an incompatible 
> change to the client protocol, and is much less invasive. It minimizes the 
> scope of modification to just the places where clients and Datanodes connect, 
> vs changing all uses of Datanode identifiers.
> New client and Datanode configuration options are introduced (see the sketch 
> after this description):
> - {{dfs.client.use.datanode.hostname}} indicates that all client-to-datanode 
> connections should use the datanode hostname (as clients outside the cluster 
> may not be able to route the IP)
> - {{dfs.datanode.use.datanode.hostname}} indicates whether Datanodes should 
> use hostnames when connecting to other Datanodes for data transfer
> If the configuration options are not used, there is no change in the current 
> behavior.
> Btw, I'm doing something similar to #1 in trunk in HDFS-3144: refactoring the 
> use of DatanodeID to use the right field (IP, IP:xferPort, hostname, etc.) 
> based on the context the ID is being used in, vs always using the IP:xferPort 
> as the Datanode's name and using that name everywhere.
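
A minimal sketch, assuming only the {{dfs.client.use.datanode.hostname}} key 
described above (the class name and file path are illustrative), of how a client 
would opt in programmatically:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HostnameClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Ask the client to dial datanodes by hostname rather than by the
    // (possibly unroutable) IP returned from the namenode.
    conf.setBoolean("dfs.client.use.datanode.hostname", true);
    FileSystem fs = FileSystem.get(conf);
    FSDataInputStream in = fs.open(new Path("/illustrative/file"));
    System.out.println("first byte: " + in.read());
    in.close();
  }
}
{code}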

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3179) failed to append data, DataStreamer throw an exception, "nodes.length != original.length + 1" on single datanode cluster

2012-04-02 Thread Zhanwei.Wang (Created) (JIRA)
failed to append data, DataStreamer throw an exception, "nodes.length != 
original.length + 1" on single datanode cluster


 Key: HDFS-3179
 URL: https://issues.apache.org/jira/browse/HDFS-3179
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.23.2
Reporter: Zhanwei.Wang
Priority: Critical


Create a single-datanode cluster:

disable permissions
enable webhdfs
start hdfs
run the test script

Expected result:
a file named "test" is created and its content is "testtest"

The result I got:
HDFS throws an exception on the second append operation.

{code}
./test.sh 
{"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed
 to add a datanode: nodes.length != original.length + 1, 
nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]"}}
{code}

Log in datanode:
{code}
2012-04-02 14:34:21,058 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer 
Exception
java.io.IOException: Failed to add a datanode: nodes.length != original.length 
+ 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
2012-04-02 14:34:21,059 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close 
file /test
java.io.IOException: Failed to add a datanode: nodes.length != original.length 
+ 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
{code}


test.sh
{code}
#!/bin/sh

echo "test" > test.txt

curl -L -X PUT "http://localhost:50070/webhdfs/v1/test?op=CREATE";

curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND";
curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND";

{code}




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3180) Add socket timeouts to webhdfs

2012-04-02 Thread Daryn Sharp (Created) (JIRA)
Add socket timeouts to webhdfs
--

 Key: HDFS-3180
 URL: https://issues.apache.org/jira/browse/HDFS-3180
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.23.0, 0.24.0
Reporter: Daryn Sharp


WebHDFS connections may hang indefinitely because no timeouts are set on the 
connection. WebHDFS should be adapted in a fashion similar to HDFS-3166 for hftp.
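
The general shape of such a fix, as a hedged sketch (the class name and timeout 
values are illustrative, not taken from HDFS-3166): set explicit connect and read 
timeouts when opening the HTTP connection so a stalled server cannot hang the 
client forever.
{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutSketch {
  static HttpURLConnection openWithTimeouts(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setConnectTimeout(60 * 1000); // give up if the TCP connect stalls
    conn.setReadTimeout(60 * 1000);    // give up if the server stops responding
    return conn;
  }
}
{code}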

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-2185) HA: HDFS portion of ZK-based FailoverController

2012-04-02 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2185.
---

   Resolution: Fixed
Fix Version/s: Auto failover (HDFS-3042)
 Hadoop Flags: Reviewed

Committed to auto-failover branch (HDFS-3042). Please feel free to continue 
commenting on the design -- either here, or if you prefer, we can move the 
discussion to the umbrella task HDFS-3042. Apologies for having uploaded the 
document to this subtask instead of the supertask.

> HA: HDFS portion of ZK-based FailoverController
> ---
>
> Key: HDFS-2185
> URL: https://issues.apache.org/jira/browse/HDFS-2185
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: auto-failover, ha
>Affects Versions: 0.24.0, 0.23.3
>Reporter: Eli Collins
>Assignee: Todd Lipcon
> Fix For: Auto failover (HDFS-3042)
>
> Attachments: Failover_Controller.jpg, hdfs-2185.txt, hdfs-2185.txt, 
> hdfs-2185.txt, hdfs-2185.txt, hdfs-2185.txt, zkfc-design.pdf, 
> zkfc-design.pdf, zkfc-design.pdf, zkfc-design.tex
>
>
> This jira is for a ZK-based FailoverController daemon. The FailoverController 
> is a separate daemon from the NN that does the following:
> * Initiates leader election (via ZK) when necessary
> * Performs health monitoring (aka failure detection)
> * Performs fail-over (standby to active and active to standby transitions)
> * Heartbeats to ensure liveness
> It should have the same/similar interface as the Linux HA RM to aid 
> pluggability.
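
A minimal sketch of the leader-election primitive the description above leans on 
(the znode path, data, and error handling are illustrative, not the ZKFC's actual 
implementation): whichever controller succeeds in creating an ephemeral znode holds 
the active role; when its ZooKeeper session dies, the znode disappears and the other 
controller can take over.
{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ElectionSketch {
  // Assumes the parent znode (here "/election") already exists.
  static boolean tryBecomeActive(ZooKeeper zk, byte[] myId)
      throws KeeperException, InterruptedException {
    try {
      zk.create("/election/active", myId,
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      return true;  // we created the ephemeral lock node: become active
    } catch (KeeperException.NodeExistsException e) {
      return false; // another controller already holds the active role
    }
  }
}
{code}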

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3181) org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart fails intermittently

2012-04-02 Thread Colin Patrick McCabe (Created) (JIRA)
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart
 fails intermittently


 Key: HDFS-3181
 URL: https://issues.apache.org/jira/browse/HDFS-3181
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
 Attachments: testOut.txt

org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart
 seems to be failing intermittently on jenkins.

{code}
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart
Failing for the past 1 build (Since Failed#2163 )
Took 8.4 sec.
Error Message

Lease mismatch on /hardLeaseRecovery owned by HDFS_NameNode but is accessed by 
DFSClient_NONMAPREDUCE_1147689755_1  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2076)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2051)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:1983)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:492)
  at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:311)
  at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42604)
  at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:417)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:891)  at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1661)  at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1657)  at 
java.security.AccessController.doPrivileged(Native Method)  at 
javax.security.auth.Subject.doAs(Subject.java:396)  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1205)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1655) 

Stacktrace

org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: Lease mismatch on 
/hardLeaseRecovery owned by HDFS_NameNode but is accessed by 
DFSClient_NONMAPREDUCE_1147689755_1
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2076)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2051)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:1983)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:492)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:311)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42604)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:417)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:891)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1661)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1657)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1205)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1655)

at org.apache.hadoop.ipc.Client.call(Client.java:1159)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:185)
at $Proxy15.getAdditionalDatanode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy15.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:317)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:828)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatan

[jira] [Created] (HDFS-3182) Add class to manage JournalList

2012-04-02 Thread Suresh Srinivas (Created) (JIRA)
Add class to manage JournalList
---

 Key: HDFS-3182
 URL: https://issues.apache.org/jira/browse/HDFS-3182
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Reporter: Suresh Srinivas


See the comment for details of the JournalList ZooKeeper znode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira