Hadoop-Hdfs-22-branch - Build # 33 - Still Failing

2011-03-31 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-22-branch/33/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 3511 lines...]
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target
 [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
 [echo]  Free Ports: startup-49683 / http-49684 / https-49685
 [echo] Please take a deep breath while Cargo gets the Tomcat for running 
the servlet tests...
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/tomcat-config
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/logs
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/reports
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -
   [cactus] Running tests against Tomcat 5.x @ http://localhost:49684
   [cactus] -
   [cactus] Deploying 
[/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/test.war]
 to 
[/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]  
jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]  and 
jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.474 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.342 sec
   [cactus] Tomcat 5.x started on port [49684]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.294 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.331 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.836 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:749:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:730:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/build.xml:48:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/hdfsproxy/build.xml:343:
 Tests failed!

Total time: 50 minutes 11 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
###

Hadoop-Hdfs-trunk - Build # 623 - Still Failing

2011-03-31 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/623/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 710777 lines...]
[junit] 2011-03-31 12:33:18,494 INFO  datanode.DataNode 
(BlockReceiver.java:run(926)) - PacketResponder blk_8517475587862166522_1001 0 
: Thread is interrupted.
[junit] 2011-03-31 12:33:18,494 INFO  datanode.DataNode 
(DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads 
is 3
[junit] 2011-03-31 12:33:18,494 INFO  ipc.Server (Server.java:run(691)) - 
Stopping IPC Server Responder
[junit] 2011-03-31 12:33:18,495 INFO  datanode.DataNode 
(BlockReceiver.java:run(1010)) - PacketResponder 0 for block 
blk_8517475587862166522_1001 terminating
[junit] 2011-03-31 12:33:18,495 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$after$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$9$725950a6(220))
 - FI: blockFileClose, datanode=DatanodeRegistration(127.0.0.1:51778, 
storageID=DS-1795582721-127.0.1.1-51778-1301574787630, infoPort=48804, 
ipcPort=48740)
[junit] 2011-03-31 12:33:18,496 ERROR datanode.DataNode 
(DataXceiver.java:run(132)) - DatanodeRegistration(127.0.0.1:51778, 
storageID=DS-1795582721-127.0.1.1-51778-1301574787630, infoPort=48804, 
ipcPort=48740):DataXceiver
[junit] java.lang.RuntimeException: java.lang.InterruptedException: sleep 
interrupted
[junit] at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:82)
[junit] at 
org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:346)
[junit] at 
org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
[junit] at 
org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:463)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:651)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:360)
[junit] at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:390)
[junit] at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:332)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit] Caused by: java.lang.InterruptedException: sleep interrupted
[junit] at java.lang.Thread.sleep(Native Method)
[junit] at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
[junit] ... 11 more
[junit] 2011-03-31 12:33:18,497 INFO  datanode.DataNode 
(DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2011-03-31 12:33:18,598 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
[junit] 2011-03-31 12:33:18,598 INFO  datanode.DataNode 
(DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:51778, 
storageID=DS-1795582721-127.0.1.1-51778-1301574787630, infoPort=48804, 
ipcPort=48740):Finishing DataNode in: 
FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
[junit] 2011-03-31 12:33:18,598 INFO  ipc.Server (Server.java:stop(1626)) - 
Stopping server on 48740
[junit] 2011-03-31 12:33:18,598 INFO  datanode.DataNode 
(DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2011-03-31 12:33:18,599 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2011-03-31 12:33:18,599 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2011-03-31 12:33:18,599 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2011-03-31 12:33:18,701 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(70)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
[junit] 2011-03-31 12:33:18,701 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time 
for transactions(ms): 1Number of transactions batched in Syncs: 0 Number 

[jira] [Resolved] (HDFS-1778) Log Improvements in org.apache.hadoop.hdfs.server.datanode.BlockReceiver.

2011-03-31 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-1778.
---

Resolution: Not A Problem

> Log Improvements in org.apache.hadoop.hdfs.server.datanode.BlockReceiver.
> -
>
> Key: HDFS-1778
> URL: https://issues.apache.org/jira/browse/HDFS-1778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-1778.patch
>
>
> Here we use many '+' operators to construct log messages. To avoid creating 
> unnecessary intermediate String objects, we can use a StringBuilder and 
> append instead.
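
As a rough sketch of the proposal (the Log field, method, and message below
are illustrative only, not the actual BlockReceiver lines):

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    public class LogMessageExample {
      private static final Log LOG = LogFactory.getLog(LogMessageExample.class);

      void logReceive(String block, String src, String dest) {
        // '+' concatenation: the message (and its intermediate pieces)
        // is built even when INFO logging is disabled.
        LOG.info("Receiving block " + block + " src: " + src + " dest: " + dest);

        // Guarded StringBuilder: the message is only assembled when it
        // will actually be emitted.
        if (LOG.isInfoEnabled()) {
          StringBuilder sb = new StringBuilder();
          sb.append("Receiving block ").append(block)
            .append(" src: ").append(src)
            .append(" dest: ").append(dest);
          LOG.info(sb.toString());
        }
      }
    }

Note that javac already compiles a single '+' expression into StringBuilder
appends, so the main saving is the isInfoEnabled() guard, which skips building
the message entirely when INFO is off; that may be why this was resolved as
Not A Problem.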

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HDFS-1189) Quota counts missed between clear quota and set quota

2011-03-31 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George reopened HDFS-1189:
---


Need to submit patch for .20 release

> Quota counts missed between clear quota and set quota
> -
>
> Key: HDFS-1189
> URL: https://issues.apache.org/jira/browse/HDFS-1189
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.2, 0.22.0
>Reporter: Kang Xiao
>Assignee: John George
>  Labels: hdfs, quota
> Fix For: 0.21.1, Federation Branch, 0.22.0, 0.23.0
>
> Attachments: HDFS-1189.patch, HDFS-1189.patch, HDFS-1189.patch, 
> hdfs-1189-1.patch
>
>
> HDFS quota counts can be missed between a clear-quota operation and a 
> subsequent set-quota.
> When a quota is set on a dir, the INodeDirectory is replaced by an 
> INodeDirectoryWithQuota and dir.isQuotaSet() becomes true. When the 
> INodeDirectoryWithQuota is newly created, quota counting is performed. 
> However, when the quota is cleared, the quota value is set to -1 and 
> dir.isQuotaSet() becomes false, but the INodeDirectoryWithQuota is NOT 
> replaced back with an INodeDirectory.
> FSDirectory.updateCount only updates the quota count for inodes where 
> isQuotaSet() is true. So after clearing the quota on a dir, its quota counts 
> are no longer updated, which is reasonable. But when the quota is later set 
> again on this dir, quota counting is not re-performed, so any counts that 
> changed in the meantime are missed.
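
A schematic sketch of the sequence described above (a simplified, hypothetical
stand-in for the real inode classes, which carry much more state):

    // Names mirror HDFS, but the bodies are illustrative only.
    class INodeDirectoryWithQuota {
      long nsQuota;   // -1 means "quota cleared"
      long nsCount;   // namespace usage count

      // Setting a quota on a plain INodeDirectory replaces it with this
      // class, and the subtree is counted once, at construction time.
      INodeDirectoryWithQuota(long quota, long countedSubtreeSize) {
        this.nsQuota = quota;
        this.nsCount = countedSubtreeSize;
      }

      boolean isQuotaSet() { return nsQuota >= 0; }

      // FSDirectory.updateCount only adjusts inodes with a quota set, so
      // while the quota is cleared (nsQuota == -1) changes go uncounted.
      void updateCount(long delta) {
        if (isQuotaSet()) {
          nsCount += delta;
        }
      }

      // Re-setting the quota on an existing INodeDirectoryWithQuota only
      // changes nsQuota; the subtree is NOT recounted, so anything added
      // or removed while the quota was cleared is missed.
      void setQuota(long quota) {
        this.nsQuota = quota;
      }
    }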

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-1793) Add code to inspect a storage directory with txid-based filenames

2011-03-31 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-1793.
---

Resolution: Fixed

> Add code to inspect a storage directory with txid-based filenames
> -
>
> Key: HDFS-1793
> URL: https://issues.apache.org/jira/browse/HDFS-1793
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073)
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: Edit log branch (HDFS-1073)
>
> Attachments: HDFS-1793.txt, hdfs-1793.txt
>
>
> After HDFS-1538, the startup of the NN proceeds in the following phases:
> - Inspect storage directories
> - Formulate a "load plan", where a load plan is:
> -- A set of recovery actions (e.g. deleting half-uploaded checkpoints)
> -- The fsimage to restore
> -- A sequence of edit logs to replay
> - Run the recovery
> - Load the specified image
> - Load the specified edits
> This JIRA adds code to inspect a set of storage directories where the image 
> and edits files are named according to the scheme designed in HDFS-1073, and 
> to formulate such a load plan.
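
A rough sketch of what such a load plan could look like as a type (hypothetical
shape and names; the actual HDFS-1073 branch code may differ):

    import java.io.File;
    import java.util.List;

    abstract class LoadPlan {
      abstract List<Runnable> getRecoveryActions(); // e.g. delete half-uploaded checkpoints
      abstract File getImageFile();                 // the fsimage to restore
      abstract List<File> getEditsFiles();          // edit logs to replay, in order

      // Startup then applies the three phases in sequence.
      void apply() throws Exception {
        for (Runnable action : getRecoveryActions()) {
          action.run();
        }
        loadImage(getImageFile());
        for (File edits : getEditsFiles()) {
          loadEdits(edits);
        }
      }

      abstract void loadImage(File image) throws Exception;
      abstract void loadEdits(File edits) throws Exception;
    }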

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-1794) Add code to list which edit logs are available on a remote NN

2011-03-31 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-1794.
---

Resolution: Fixed

> Add code to list which edit logs are available on a remote NN
> -
>
> Key: HDFS-1794
> URL: https://issues.apache.org/jira/browse/HDFS-1794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073)
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: Edit log branch (HDFS-1073)
>
> Attachments: hdfs-1794.txt, hdfs-1794.txt
>
>
> When the 2NN or BN needs to sync up with the primary NN, it may need to 
> download several different edits files since the NN may roll whenever it 
> likes. This JIRA adds a new type called RemoteEditLogManifest to list the 
> available edit log files since a given transaction ID. This may also be 
> useful for monitoring or backup tools down the road.
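
A rough sketch of what such a manifest type could look like (hypothetical
fields and accessors; the committed class may differ):

    import java.util.Collections;
    import java.util.List;

    // One finalized edit log segment available on the primary NN.
    class RemoteEditLog {
      final long startTxId;  // first transaction in the segment
      final long endTxId;    // last transaction in the segment
      RemoteEditLog(long startTxId, long endTxId) {
        this.startTxId = startTxId;
        this.endTxId = endTxId;
      }
    }

    class RemoteEditLogManifest {
      // Segments covering all transactions at or after the sinceTxId the
      // 2NN/BN asked about, in order, so it can download exactly what it
      // is missing.
      private final List<RemoteEditLog> logs;
      RemoteEditLogManifest(List<RemoteEditLog> logs) {
        this.logs = Collections.unmodifiableList(logs);
      }
      List<RemoteEditLog> getLogs() { return logs; }
    }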

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1799) Refactor log rolling and filename management out of FSEditLog

2011-03-31 Thread Todd Lipcon (JIRA)
Refactor log rolling and filename management out of FSEditLog
-

 Key: HDFS-1799
 URL: https://issues.apache.org/jira/browse/HDFS-1799
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Todd Lipcon
Assignee: Todd Lipcon

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira