See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/59/

------------------------------------------
[...truncated 226537 lines...]
    [junit] 2009-08-23 12:47:51,082 INFO  datanode.DataNode 
(DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 1 
msecs
    [junit] 2009-08-23 12:47:51,083 INFO  datanode.DataNode 
(DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-23 12:47:51,230 INFO  common.Storage 
(DataStorage.java:recoverTransitionRead(122)) - Storage directory 
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6 is not formatted.
    [junit] 2009-08-23 12:47:51,230 INFO  common.Storage 
(DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-23 12:47:51,539 INFO  datanode.DataNode 
(FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-23 12:47:51,539 INFO  datanode.DataNode 
(DataNode.java:startDataNode(326)) - Opened info server at 48509
    [junit] 2009-08-23 12:47:51,539 INFO  datanode.DataNode 
(DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-23 12:47:51,540 INFO  datanode.DirectoryScanner 
(DirectoryScanner.java:<init>(133)) - scan starts at 1251036241540 with 
interval 21600000
    [junit] 2009-08-23 12:47:51,541 INFO  http.HttpServer 
(HttpServer.java:start(425)) - Port returned by 
webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the 
listener on 0
    [junit] 2009-08-23 12:47:51,541 INFO  http.HttpServer 
(HttpServer.java:start(430)) - listener.getLocalPort() returned 52070 
webServer.getConnectors()[0].getLocalPort() returned 52070
    [junit] 2009-08-23 12:47:51,542 INFO  http.HttpServer 
(HttpServer.java:start(463)) - Jetty bound to port 52070
    [junit] 2009-08-23 12:47:51,542 INFO  mortbay.log (?:invoke(?)) - 
jetty-6.1.14
    [junit] 2009-08-23 12:47:51,601 INFO  mortbay.log (?:invoke(?)) - Started 
SelectChannelConnector@localhost:52070
    [junit] 2009-08-23 12:47:51,601 INFO  jvm.JvmMetrics 
(JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with 
processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-23 12:47:51,602 INFO  metrics.RpcMetrics 
(RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, 
port=48832
    [junit] 2009-08-23 12:47:51,603 INFO  ipc.Server (Server.java:run(474)) - 
IPC Server Responder: starting
    [junit] 2009-08-23 12:47:51,603 INFO  ipc.Server (Server.java:run(939)) - 
IPC Server handler 0 on 48832: starting
    [junit] 2009-08-23 12:47:51,603 INFO  ipc.Server (Server.java:run(313)) - 
IPC Server listener on 48832: starting
    [junit] 2009-08-23 12:47:51,604 INFO  datanode.DataNode 
(DataNode.java:startDataNode(404)) - dnRegistration = 
DatanodeRegistration(vesta.apache.org:48509, storageID=, infoPort=52070, 
ipcPort=48832)
    [junit] 2009-08-23 12:47:51,605 INFO  hdfs.StateChange 
(FSNamesystem.java:registerDatanode(1774)) - BLOCK* 
NameSystem.registerDatanode: node registration from 127.0.0.1:48509 storage 
DS-830920231-67.195.138.9-48509-1251031671604
    [junit] 2009-08-23 12:47:51,606 INFO  net.NetworkTopology 
(NetworkTopology.java:add(327)) - Adding a new node: 
/default-rack/127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,650 INFO  datanode.DataNode 
(DataNode.java:register(571)) - New storage id 
DS-830920231-67.195.138.9-48509-1251031671604 is assigned to data-node 
127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,651 INFO  datanode.DataNode 
(DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:48509, 
storageID=DS-830920231-67.195.138.9-48509-1251031671604, infoPort=52070, 
ipcPort=48832)In DataNode.run, data = 
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
 
    [junit] 2009-08-23 12:47:51,651 INFO  datanode.DataNode 
(DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec 
Initial delay: 0msec
    [junit] 2009-08-23 12:47:51,688 INFO  datanode.DataNode 
(DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 2 
msecs
    [junit] 2009-08-23 12:47:51,688 INFO  datanode.DataNode 
(DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-23 12:47:51,841 INFO  FSNamesystem.audit 
(FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1   
cmd=create      src=/pipeline_Fi_16/foo dst=null        
perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-23 12:47:51,844 INFO  hdfs.StateChange 
(FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: 
/pipeline_Fi_16/foo. blk_-3823116340774608643_1001
    [junit] 2009-08-23 12:47:51,883 INFO  protocol.ClientProtocolAspects 
(ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(35))
 - FI: addBlock Pipeline[127.0.0.1:33089, 127.0.0.1:48509, 127.0.0.1:42956]
    [junit] 2009-08-23 12:47:51,884 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
 - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,885 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72))
 - FI: receiverOpWriteBlock
    [junit] 2009-08-23 12:47:51,885 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(222)) - Receiving block 
blk_-3823116340774608643_1001 src: /127.0.0.1:59253 dest: /127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,886 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
 - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,887 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72))
 - FI: receiverOpWriteBlock
    [junit] 2009-08-23 12:47:51,887 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(222)) - Receiving block 
blk_-3823116340774608643_1001 src: /127.0.0.1:45683 dest: /127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,888 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
 - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,888 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72))
 - FI: receiverOpWriteBlock
    [junit] 2009-08-23 12:47:51,888 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(222)) - Receiving block 
blk_-3823116340774608643_1001 src: /127.0.0.1:48571 dest: /127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,889 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61))
 - FI: statusRead SUCCESS, datanode=127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,890 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61))
 - FI: statusRead SUCCESS, datanode=127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,890 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,891 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,891 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,891 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,891 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,892 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,892 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,892 INFO  fi.FiTestUtil 
(DataTransferTestUtil.java:run(151)) - FI: pipeline_Fi_16, index=2, 
datanode=127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,892 WARN  datanode.DataNode 
(DataNode.java:checkDiskError(702)) - checkDiskError: exception: 
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: 
pipeline_Fi_16, index=2, datanode=127.0.0.1:42956
    [junit]     at 
org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit]     at 
org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit]     at 
org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit]     at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit]     at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-23 12:47:51,893 INFO  mortbay.log (?:invoke(?)) - Completed 
FSVolumeSet.checkDirs. Removed=0volumes. List of current volumes: 
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current
 
    [junit] 2009-08-23 12:47:51,893 INFO  datanode.DataNode 
(BlockReceiver.java:receiveBlock(569)) - Exception in receiveBlock for block 
blk_-3823116340774608643_1001 
org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, 
index=2, datanode=127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,894 INFO  datanode.DataNode 
(BlockReceiver.java:lastDataNodeRun(782)) - PacketResponder 0 for block 
blk_-3823116340774608643_1001 Interrupted.
    [junit] 2009-08-23 12:47:51,894 INFO  datanode.DataNode 
(BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block 
blk_-3823116340774608643_1001 terminating
    [junit] 2009-08-23 12:47:51,894 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(358)) - writeBlock blk_-3823116340774608643_1001 
received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: 
FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,894 ERROR datanode.DataNode 
(DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:42956, 
storageID=DS-1038161714-67.195.138.9-42956-1251031670395, infoPort=48218, 
ipcPort=60502):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: 
pipeline_Fi_16, index=2, datanode=127.0.0.1:42956
    [junit]     at 
org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit]     at 
org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit]     at 
org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit]     at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit]     at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-23 12:47:51,894 INFO  datanode.DataNode 
(BlockReceiver.java:run(917)) - PacketResponder blk_-3823116340774608643_1001 1 
Exception java.io.EOFException
    [junit]     at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit]     at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:879)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-23 12:47:51,926 INFO  datanode.DataNode 
(BlockReceiver.java:run(1025)) - PacketResponder 1 for block 
blk_-3823116340774608643_1001 terminating
    [junit] 2009-08-23 12:47:51,926 INFO  datanode.DataNode 
(BlockReceiver.java:run(1025)) - PacketResponder 2 for block 
blk_-3823116340774608643_1001 terminating
    [junit] 2009-08-23 12:47:51,927 WARN  hdfs.DFSClient 
(DFSClient.java:run(2601)) - DFSOutputStream ResponseProcessor exception  for 
block blk_-3823116340774608643_1001java.io.IOException: Bad response ERROR for 
block blk_-3823116340774608643_1001 from datanode 127.0.0.1:42956
    [junit]     at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2581)
    [junit] 
    [junit] 2009-08-23 12:47:51,927 WARN  hdfs.DFSClient 
(DFSClient.java:processDatanodeError(2630)) - Error Recovery for block 
blk_-3823116340774608643_1001 bad datanode[2] 127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,927 WARN  hdfs.DFSClient 
(DFSClient.java:processDatanodeError(2674)) - Error Recovery for block 
blk_-3823116340774608643_1001 in pipeline 127.0.0.1:33089, 127.0.0.1:48509, 
127.0.0.1:42956: bad datanode 127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,930 INFO  datanode.DataNode 
(DataNode.java:logRecoverBlock(1727)) - Client calls 
recoverBlock(block=blk_-3823116340774608643_1001, targets=[127.0.0.1:33089, 
127.0.0.1:48509])
    [junit] 2009-08-23 12:47:51,935 INFO  datanode.DataNode 
(DataNode.java:updateBlock(1537)) - 
oldblock=blk_-3823116340774608643_1001(length=1), 
newblock=blk_-3823116340774608643_1002(length=1), datanode=127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,937 INFO  datanode.DataNode 
(DataNode.java:updateBlock(1537)) - 
oldblock=blk_-3823116340774608643_1001(length=1), 
newblock=blk_-3823116340774608643_1002(length=1), datanode=127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,938 INFO  namenode.FSNamesystem 
(FSNamesystem.java:commitBlockSynchronization(1613)) - 
commitBlockSynchronization(lastblock=blk_-3823116340774608643_1001, 
newgenerationstamp=1002, newlength=1, newtargets=[127.0.0.1:33089, 
127.0.0.1:48509], closeFile=false, deleteBlock=false)
    [junit] 2009-08-23 12:47:51,939 INFO  namenode.FSNamesystem 
(FSNamesystem.java:commitBlockSynchronization(1677)) - 
commitBlockSynchronization(blk_-3823116340774608643_1002) successful
    [junit] 2009-08-23 12:47:51,941 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
 - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,941 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72))
 - FI: receiverOpWriteBlock
    [junit] 2009-08-23 12:47:51,941 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(222)) - Receiving block 
blk_-3823116340774608643_1002 src: /127.0.0.1:59258 dest: /127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,941 INFO  datanode.DataNode 
(FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append 
blk_-3823116340774608643_1002
    [junit] 2009-08-23 12:47:51,942 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
 - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,942 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72))
 - FI: receiverOpWriteBlock
    [junit] 2009-08-23 12:47:51,942 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(222)) - Receiving block 
blk_-3823116340774608643_1002 src: /127.0.0.1:45688 dest: /127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,943 INFO  datanode.DataNode 
(FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append 
blk_-3823116340774608643_1002
    [junit] 2009-08-23 12:47:51,943 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61))
 - FI: statusRead SUCCESS, datanode=127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,944 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,944 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,944 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,945 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,945 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
 - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,946 INFO  DataNode.clienttrace 
(BlockReceiver.java:lastDataNodeRun(822)) - src: /127.0.0.1:45688, dest: 
/127.0.0.1:48509, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_1407762613, 
offset: 0, srvID: DS-830920231-67.195.138.9-48509-1251031671604, blockid: 
blk_-3823116340774608643_1002, duration: 2397143
    [junit] 2009-08-23 12:47:51,987 INFO  datanode.DataNode 
(BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block 
blk_-3823116340774608643_1002 terminating
    [junit] 2009-08-23 12:47:51,987 INFO  hdfs.StateChange 
(BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: 
blockMap updated: 127.0.0.1:48509 is added to blk_-3823116340774608643_1002 
size 1
    [junit] 2009-08-23 12:47:51,989 INFO  DataNode.clienttrace 
(BlockReceiver.java:run(955)) - src: /127.0.0.1:59258, dest: /127.0.0.1:33089, 
bytes: 1, op: HDFS_WRITE, cliID: DFSClient_1407762613, offset: 0, srvID: 
DS-2088916918-67.195.138.9-33089-1251031670997, blockid: 
blk_-3823116340774608643_1002, duration: 45005337
    [junit] 2009-08-23 12:47:51,990 INFO  datanode.DataNode 
(BlockReceiver.java:run(1025)) - PacketResponder 1 for block 
blk_-3823116340774608643_1002 terminating
    [junit] 2009-08-23 12:47:51,990 INFO  hdfs.StateChange 
(BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: 
blockMap updated: 127.0.0.1:33089 is added to blk_-3823116340774608643_1002 
size 1
    [junit] 2009-08-23 12:47:51,992 INFO  hdfs.StateChange 
(FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: 
file /pipeline_Fi_16/foo is closed by DFSClient_1407762613
    [junit] 2009-08-23 12:47:52,003 INFO  FSNamesystem.audit 
(FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1   
cmd=open        src=/pipeline_Fi_16/foo dst=null        perm=null
    [junit] 2009-08-23 12:47:52,005 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
 - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:48509
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-23 12:47:52,006 INFO  DataNode.clienttrace 
(BlockSender.java:sendBlock(417)) - src: /127.0.0.1:48509, dest: 
/127.0.0.1:45689, bytes: 5, op: HDFS_READ, cliID: DFSClient_1407762613, offset: 
0, srvID: DS-830920231-67.195.138.9-48509-1251031671604, blockid: 
blk_-3823116340774608643_1002, duration: 250492
    [junit] 2009-08-23 12:47:52,007 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61))
 - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:48509
    [junit] 2009-08-23 12:47:52,031 INFO  ipc.Client 
(Client.java:handleConnectionFailure(395)) - Retrying connect to server: 
localhost/127.0.0.1:49694. Already tried 1 time(s).
    [junit] 2009-08-23 12:47:52,108 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 48832
    [junit] 2009-08-23 12:47:52,108 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 0 on 48832: exiting
    [junit] 2009-08-23 12:47:52,108 INFO  ipc.Server (Server.java:run(352)) - 
Stopping IPC Server listener on 48832
    [junit] 2009-08-23 12:47:52,109 INFO  ipc.Server (Server.java:run(539)) - 
Stopping IPC Server Responder
    [junit] 2009-08-23 12:47:52,109 WARN  datanode.DataNode 
(DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:48509, 
storageID=DS-830920231-67.195.138.9-48509-1251031671604, infoPort=52070, 
ipcPort=48832):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit]     at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit]     at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit]     at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-23 12:47:52,109 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 0
    [junit] 2009-08-23 12:47:52,110 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-23 12:47:52,110 INFO  datanode.DataNode 
(DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:48509, 
storageID=DS-830920231-67.195.138.9-48509-1251031671604, infoPort=52070, 
ipcPort=48832):Finishing DataNode in: 
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
 
    [junit] 2009-08-23 12:47:52,111 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 48832
    [junit] 2009-08-23 12:47:52,111 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-23 12:47:52,231 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 58600
    [junit] 2009-08-23 12:47:52,231 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 0 on 58600: exiting
    [junit] 2009-08-23 12:47:52,233 INFO  ipc.Server (Server.java:run(352)) - 
Stopping IPC Server listener on 58600
    [junit] 2009-08-23 12:47:52,233 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 1
    [junit] 2009-08-23 12:47:52,233 INFO  ipc.Server (Server.java:run(539)) - 
Stopping IPC Server Responder
    [junit] 2009-08-23 12:47:52,233 WARN  datanode.DataNode 
(DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:33089, 
storageID=DS-2088916918-67.195.138.9-33089-1251031670997, infoPort=49467, 
ipcPort=58600):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit]     at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit]     at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit]     at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-23 12:47:52,235 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 0
    [junit] 2009-08-23 12:47:52,236 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-23 12:47:52,237 INFO  datanode.DataNode 
(DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:33089, 
storageID=DS-2088916918-67.195.138.9-33089-1251031670997, infoPort=49467, 
ipcPort=58600):Finishing DataNode in: 
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
 
    [junit] 2009-08-23 12:47:52,237 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 58600
    [junit] 2009-08-23 12:47:52,237 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-23 12:47:52,340 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 60502
    [junit] 2009-08-23 12:47:52,341 INFO  ipc.Server (Server.java:run(352)) - 
Stopping IPC Server listener on 60502
    [junit] 2009-08-23 12:47:52,342 INFO  ipc.Server (Server.java:run(539)) - 
Stopping IPC Server Responder
    [junit] 2009-08-23 12:47:52,342 WARN  datanode.DataNode 
(DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:42956, 
storageID=DS-1038161714-67.195.138.9-42956-1251031670395, infoPort=48218, 
ipcPort=60502):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit]     at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit]     at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit]     at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-23 12:47:52,342 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 1
    [junit] 2009-08-23 12:47:52,341 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 0 on 60502: exiting
    [junit] 2009-08-23 12:47:52,343 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-23 12:47:52,343 INFO  datanode.DataNode 
(DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:42956, 
storageID=DS-1038161714-67.195.138.9-42956-1251031670395, infoPort=48218, 
ipcPort=60502):Finishing DataNode in: 
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
 
    [junit] 2009-08-23 12:47:52,343 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 60502
    [junit] 2009-08-23 12:47:52,343 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 0
    [junit] 2009-08-23 12:47:52,473 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(67)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-23 12:47:52,473 INFO  namenode.FSNamesystem 
(FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time 
for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of 
syncs: 2 SyncTimes(ms): 56 23 
    [junit] 2009-08-23 12:47:52,473 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(2077)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 46463
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 0 on 46463: exiting
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 1 on 46463: exiting
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 3 on 46463: exiting
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 9 on 46463: exiting
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 6 on 46463: exiting
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 2 on 46463: exiting
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(352)) - 
Stopping IPC Server listener on 46463
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 8 on 46463: exiting
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 5 on 46463: exiting
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 7 on 46463: exiting
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 4 on 46463: exiting
    [junit] 2009-08-23 12:47:52,497 INFO  ipc.Server (Server.java:run(539)) - 
Stopping IPC Server Responder
    [junit] Tests run: 7, Failures: 0, Errors: 1, Time elapsed: 84.843 sec
    [junit] 2009-08-23 12:47:52,544 ERROR hdfs.DFSClient 
(DFSClient.java:close(1084)) - Exception closing file /pipeline_Fi_12/foo : 
java.io.IOException: Bad connect ack with firstBadLink as 127.0.0.1:58044
    [junit] java.io.IOException: Bad connect ack with firstBadLink as 
127.0.0.1:58044
    [junit]     at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.createBlockOutputStream(DFSClient.java:2865)
    [junit]     at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSClient.java:2789)
    [junit]     at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2399)
    [junit] 2009-08-23 12:47:52,545 INFO  ipc.Client 
(Client.java:handleConnectionFailure(395)) - Retrying connect to server: 
localhost/127.0.0.1:49694. Already tried 2 time(s).
    [junit] 2009-08-23 12:47:53,545 INFO  ipc.Client 
(Client.java:handleConnectionFailure(395)) - Retrying connect to server: 
localhost/127.0.0.1:49694. Already tried 3 time(s).
    [junit] 2009-08-23 12:47:54,545 INFO  ipc.Client 
(Client.java:handleConnectionFailure(395)) - Retrying connect to server: 
localhost/127.0.0.1:49694. Already tried 4 time(s).
    [junit] 2009-08-23 12:47:55,546 INFO  ipc.Client 
(Client.java:handleConnectionFailure(395)) - Retrying connect to server: 
localhost/127.0.0.1:49694. Already tried 5 time(s).
    [junit] 2009-08-23 12:47:56,546 INFO  ipc.Client 
(Client.java:handleConnectionFailure(395)) - Retrying connect to server: 
localhost/127.0.0.1:49694. Already tried 6 time(s).
    [junit] 2009-08-23 12:47:57,546 INFO  ipc.Client 
(Client.java:handleConnectionFailure(395)) - Retrying connect to server: 
localhost/127.0.0.1:49694. Already tried 7 time(s).
    [junit] 2009-08-23 12:47:58,547 INFO  ipc.Client 
(Client.java:handleConnectionFailure(395)) - Retrying connect to server: 
localhost/127.0.0.1:49694. Already tried 8 time(s).
    [junit] 2009-08-23 12:47:59,547 INFO  ipc.Client 
(Client.java:handleConnectionFailure(395)) - Retrying connect to server: 
localhost/127.0.0.1:49694. Already tried 9 time(s).
    [junit] 2009-08-23 12:47:59,548 WARN  hdfs.DFSClient 
(DFSClient.java:run(1140)) - Problem renewing lease for DFSClient_-805227971 
for a period of 0 seconds. Will retry shortly...
    [junit] java.net.ConnectException: Call to localhost/127.0.0.1:49694 failed 
on connection exception: java.net.ConnectException: Connection refused
    [junit]     at org.apache.hadoop.ipc.Client.wrapException(Client.java:793)
    [junit]     at org.apache.hadoop.ipc.Client.call(Client.java:769)
    [junit]     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:223)
    [junit]     at $Proxy5.renewLease(Unknown Source)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit]     at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit]     at java.lang.reflect.Method.invoke(Method.java:597)
    [junit]     at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    [junit]     at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    [junit]     at $Proxy5.renewLease(Unknown Source)
    [junit]     at 
org.apache.hadoop.hdfs.DFSClient$LeaseChecker.renew(DFSClient.java:1115)
    [junit]     at 
org.apache.hadoop.hdfs.DFSClient$LeaseChecker.run(DFSClient.java:1131)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] Caused by: java.net.ConnectException: Connection refused
    [junit]     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    [junit]     at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
    [junit]     at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    [junit]     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:368)
    [junit]     at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:325)
    [junit]     at 
org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:197)
    [junit]     at org.apache.hadoop.ipc.Client.getConnection(Client.java:886)
    [junit]     at org.apache.hadoop.ipc.Client.call(Client.java:746)
    [junit]     ... 12 more
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol FAILED

checkfailure:
    [touch] Creating 
http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/testsfailed

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build.xml:722: The following error occurred while executing this line:
http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build.xml:363: The following error occurred while executing this line:
http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build.xml:646: The following error occurred while executing this line:
http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build.xml:641: The following error occurred while executing this line:
http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build.xml:705: Tests failed!

Total time: 74 minutes 6 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...