See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/45/

------------------------------------------
[...truncated 317708 lines...]
    [junit] 2009-08-10 15:21:36,540 INFO  common.Storage 
(DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-10 15:21:36,943 INFO  datanode.DataNode 
(FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-10 15:21:36,944 INFO  datanode.DataNode 
(DataNode.java:startDataNode(326)) - Opened info server at 49054
    [junit] 2009-08-10 15:21:36,944 INFO  datanode.DataNode 
(DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-10 15:21:36,944 INFO  datanode.DirectoryScanner 
(DirectoryScanner.java:<init>(133)) - scan starts at 1249929725944 with 
interval 21600000
    [junit] 2009-08-10 15:21:36,946 INFO  http.HttpServer 
(HttpServer.java:start(425)) - Port returned by 
webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the 
listener on 0
    [junit] 2009-08-10 15:21:36,946 INFO  http.HttpServer 
(HttpServer.java:start(430)) - listener.getLocalPort() returned 46168 
webServer.getConnectors()[0].getLocalPort() returned 46168
    [junit] 2009-08-10 15:21:36,946 INFO  http.HttpServer 
(HttpServer.java:start(463)) - Jetty bound to port 46168
    [junit] 2009-08-10 15:21:36,946 INFO  mortbay.log (?:invoke(?)) - 
jetty-6.1.14
    [junit] 2009-08-10 15:21:37,070 INFO  mortbay.log (?:invoke(?)) - Started 
SelectChannelConnector@localhost:46168
    [junit] 2009-08-10 15:21:37,071 INFO  jvm.JvmMetrics 
(JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with 
processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-10 15:21:37,072 INFO  metrics.RpcMetrics 
(RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, 
port=34492
    [junit] 2009-08-10 15:21:37,073 INFO  ipc.Server (Server.java:run(474)) - 
IPC Server Responder: starting
    [junit] 2009-08-10 15:21:37,073 INFO  datanode.DataNode 
(DataNode.java:startDataNode(404)) - dnRegistration = 
DatanodeRegistration(vesta.apache.org:49054, storageID=, infoPort=46168, 
ipcPort=34492)
    [junit] 2009-08-10 15:21:37,073 INFO  ipc.Server (Server.java:run(939)) - 
IPC Server handler 0 on 34492: starting
    [junit] 2009-08-10 15:21:37,073 INFO  ipc.Server (Server.java:run(313)) - 
IPC Server listener on 34492: starting
    [junit] 2009-08-10 15:21:37,094 INFO  hdfs.StateChange 
(FSNamesystem.java:registerDatanode(1774)) - BLOCK* 
NameSystem.registerDatanode: node registration from 127.0.0.1:49054 storage 
DS-1109244666-67.195.138.9-49054-1249917697094
    [junit] 2009-08-10 15:21:37,095 INFO  net.NetworkTopology 
(NetworkTopology.java:add(327)) - Adding a new node: 
/default-rack/127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,155 INFO  datanode.DataNode 
(DataNode.java:register(571)) - New storage id 
DS-1109244666-67.195.138.9-49054-1249917697094 is assigned to data-node 
127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,155 INFO  datanode.DataNode 
(DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:49054, 
storageID=DS-1109244666-67.195.138.9-49054-1249917697094, infoPort=46168, 
ipcPort=34492)In DataNode.run, data = 
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
 
    [junit] Starting DataNode 2 with dfs.data.dir: 
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
 
    [junit] 2009-08-10 15:21:37,156 INFO  datanode.DataNode 
(DataNode.java:offerService(739)) - using BLOCKREPORT_INTERVAL of 21600000msec 
Initial delay: 0msec
    [junit] 2009-08-10 15:21:37,164 INFO  common.Storage 
(DataStorage.java:recoverTransitionRead(122)) - Storage directory 
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5
  is not formatted.
    [junit] 2009-08-10 15:21:37,164 INFO  common.Storage 
(DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-10 15:21:37,193 INFO  datanode.DataNode 
(DataNode.java:blockReport(974)) - BlockReport of 0 blocks got processed in 1 
msecs
    [junit] 2009-08-10 15:21:37,194 INFO  datanode.DataNode 
(DataNode.java:offerService(782)) - Starting Periodic block scanner.
    [junit] 2009-08-10 15:21:37,350 INFO  common.Storage 
(DataStorage.java:recoverTransitionRead(122)) - Storage directory 
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
  is not formatted.
    [junit] 2009-08-10 15:21:37,350 INFO  common.Storage 
(DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-10 15:21:37,605 INFO  datanode.DataNode 
(FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-10 15:21:37,605 INFO  datanode.DataNode 
(DataNode.java:startDataNode(326)) - Opened info server at 38273
    [junit] 2009-08-10 15:21:37,605 INFO  datanode.DataNode 
(DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-10 15:21:37,606 INFO  datanode.DirectoryScanner 
(DirectoryScanner.java:<init>(133)) - scan starts at 1249929966606 with 
interval 21600000
    [junit] 2009-08-10 15:21:37,607 INFO  http.HttpServer 
(HttpServer.java:start(425)) - Port returned by 
webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the 
listener on 0
    [junit] 2009-08-10 15:21:37,607 INFO  http.HttpServer 
(HttpServer.java:start(430)) - listener.getLocalPort() returned 47864 
webServer.getConnectors()[0].getLocalPort() returned 47864
    [junit] 2009-08-10 15:21:37,608 INFO  http.HttpServer 
(HttpServer.java:start(463)) - Jetty bound to port 47864
    [junit] 2009-08-10 15:21:37,608 INFO  mortbay.log (?:invoke(?)) - 
jetty-6.1.14
    [junit] 2009-08-10 15:21:37,673 INFO  mortbay.log (?:invoke(?)) - Started 
SelectChannelConnector@localhost:47864
    [junit] 2009-08-10 15:21:37,673 INFO  jvm.JvmMetrics 
(JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with 
processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-10 15:21:37,675 INFO  metrics.RpcMetrics 
(RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, 
port=35997
    [junit] 2009-08-10 15:21:37,675 INFO  ipc.Server (Server.java:run(474)) - 
IPC Server Responder: starting
    [junit] 2009-08-10 15:21:37,675 INFO  ipc.Server (Server.java:run(313)) - 
IPC Server listener on 35997: starting
    [junit] 2009-08-10 15:21:37,676 INFO  ipc.Server (Server.java:run(939)) - 
IPC Server handler 0 on 35997: starting
    [junit] 2009-08-10 15:21:37,676 INFO  datanode.DataNode 
(DataNode.java:startDataNode(404)) - dnRegistration = 
DatanodeRegistration(vesta.apache.org:38273, storageID=, infoPort=47864, 
ipcPort=35997)
    [junit] 2009-08-10 15:21:37,677 INFO  hdfs.StateChange 
(FSNamesystem.java:registerDatanode(1774)) - BLOCK* 
NameSystem.registerDatanode: node registration from 127.0.0.1:38273 storage 
DS-124577835-67.195.138.9-38273-1249917697677
    [junit] 2009-08-10 15:21:37,677 INFO  net.NetworkTopology 
(NetworkTopology.java:add(327)) - Adding a new node: 
/default-rack/127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,726 INFO  datanode.DataNode 
(DataNode.java:register(571)) - New storage id 
DS-124577835-67.195.138.9-38273-1249917697677 is assigned to data-node 
127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,726 INFO  datanode.DataNode 
(DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:38273, 
storageID=DS-124577835-67.195.138.9-38273-1249917697677, infoPort=47864, 
ipcPort=35997)In DataNode.run, data = 
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
 
    [junit] 2009-08-10 15:21:37,727 INFO  datanode.DataNode 
(DataNode.java:offerService(739)) - using BLOCKREPORT_INTERVAL of 21600000msec 
Initial delay: 0msec
    [junit] 2009-08-10 15:21:37,768 INFO  datanode.DataNode 
(DataNode.java:blockReport(974)) - BlockReport of 0 blocks got processed in 1 
msecs
    [junit] 2009-08-10 15:21:37,768 INFO  datanode.DataNode 
(DataNode.java:offerService(782)) - Starting Periodic block scanner.
    [junit] 2009-08-10 15:21:37,808 INFO  FSNamesystem.audit 
(FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1   
cmd=create      src=/testPipelineFi15/foo       dst=null        
perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-10 15:21:37,811 INFO  hdfs.StateChange 
(FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: 
/testPipelineFi15/foo. blk_6077790398764684064_1001
    [junit] 2009-08-10 15:21:37,843 INFO  protocol.ClientProtocolAspects 
(ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(32))
 - FI: addBlock Pipeline[127.0.0.1:38273, 127.0.0.1:49054, 127.0.0.1:52882]
    [junit] 2009-08-10 15:21:37,845 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
 - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,845 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70))
 - FI: receiverOpWriteBlock
    [junit] 2009-08-10 15:21:37,846 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(222)) - Receiving block 
blk_6077790398764684064_1001 src: /127.0.0.1:35698 dest: /127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,847 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
 - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,847 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70))
 - FI: receiverOpWriteBlock
    [junit] 2009-08-10 15:21:37,848 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(222)) - Receiving block 
blk_6077790398764684064_1001 src: /127.0.0.1:59555 dest: /127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,849 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
 - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:52882
    [junit] 2009-08-10 15:21:37,849 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70))
 - FI: receiverOpWriteBlock
    [junit] 2009-08-10 15:21:37,849 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(222)) - Receiving block 
blk_6077790398764684064_1001 src: /127.0.0.1:38921 dest: /127.0.0.1:52882
    [junit] 2009-08-10 15:21:37,850 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60))
 - FI: statusRead SUCCESS, datanode=127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,850 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60))
 - FI: statusRead SUCCESS, datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,852 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
 - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,853 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
 - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,853 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
 - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,854 INFO  fi.FiTestUtil 
(DataTransferTestUtil.java:run(158)) - FI: testPipelineFi15, index=1, 
datanode=127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,854 INFO  datanode.DataNode 
(BlockReceiver.java:handleMirrorOutError(185)) - 
DatanodeRegistration(127.0.0.1:49054, 
storageID=DS-1109244666-67.195.138.9-49054-1249917697094, infoPort=46168, 
ipcPort=34492):Exception writing block blk_6077790398764684064_1001 to mirror 
127.0.0.1:52882
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: 
testPipelineFi15, index=1, datanode=127.0.0.1:49054
    [junit]     at 
org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
    [junit]     at 
org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit]     at 
org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit]     at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit]     at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-10 15:21:37,854 INFO  datanode.DataNode 
(BlockReceiver.java:receiveBlock(566)) - Exception in receiveBlock for block 
blk_6077790398764684064_1001 
org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: 
testPipelineFi15, index=1, datanode=127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,855 INFO  datanode.DataNode 
(BlockReceiver.java:run(907)) - PacketResponder blk_6077790398764684064_1001 1 
Exception java.io.InterruptedIOException: Interruped while waiting for IO on 
channel java.nio.channels.SocketChannel[connected local=/127.0.0.1:38921 
remote=/127.0.0.1:52882]. 59997 millis timeout left.
    [junit]     at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    [junit]     at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    [junit]     at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    [junit]     at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    [junit]     at java.io.DataInputStream.readFully(DataInputStream.java:178)
    [junit]     at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-10 15:21:37,855 INFO  datanode.DataNode 
(BlockReceiver.java:run(922)) - PacketResponder blk_6077790398764684064_1001 1 
: Thread is interrupted.
    [junit] 2009-08-10 15:21:37,855 INFO  datanode.DataNode 
(BlockReceiver.java:run(1009)) - PacketResponder 1 for block 
blk_6077790398764684064_1001 terminating
    [junit] 2009-08-10 15:21:37,855 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(358)) - writeBlock blk_6077790398764684064_1001 
received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: 
FI: testPipelineFi15, index=1, datanode=127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,855 ERROR datanode.DataNode 
(DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:49054, 
storageID=DS-1109244666-67.195.138.9-49054-1249917697094, infoPort=46168, 
ipcPort=34492):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: 
testPipelineFi15, index=1, datanode=127.0.0.1:49054
    [junit]     at 
org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
    [junit]     at 
org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit]     at 
org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit]     at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit]     at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-10 15:21:37,856 INFO  datanode.DataNode 
(BlockReceiver.java:receiveBlock(566)) - Exception in receiveBlock for block 
blk_6077790398764684064_1001 java.io.EOFException: while trying to read 65557 
bytes
    [junit] 2009-08-10 15:21:37,856 INFO  datanode.DataNode 
(BlockReceiver.java:lastDataNodeRun(779)) - PacketResponder 0 for block 
blk_6077790398764684064_1001 Interrupted.
    [junit] 2009-08-10 15:21:37,857 INFO  datanode.DataNode 
(BlockReceiver.java:lastDataNodeRun(843)) - PacketResponder 0 for block 
blk_6077790398764684064_1001 terminating
    [junit] 2009-08-10 15:21:37,857 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(358)) - writeBlock blk_6077790398764684064_1001 
received exception java.io.EOFException: while trying to read 65557 bytes
    [junit] 2009-08-10 15:21:37,857 ERROR datanode.DataNode 
(DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:52882, 
storageID=DS-593760917-67.195.138.9-52882-1249917696278, infoPort=35631, 
ipcPort=41464):DataXceiver
    [junit] java.io.EOFException: while trying to read 65557 bytes
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:271)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:315)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:379)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit]     at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit]     at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-10 15:21:37,854 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
 - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,857 INFO  datanode.DataNode 
(BlockReceiver.java:run(907)) - PacketResponder blk_6077790398764684064_1001 2 
Exception java.io.EOFException
    [junit]     at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit]     at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-10 15:21:37,859 INFO  datanode.DataNode 
(BlockReceiver.java:run(1009)) - PacketResponder 2 for block 
blk_6077790398764684064_1001 terminating
    [junit] 2009-08-10 15:21:37,859 WARN  hdfs.DFSClient 
(DFSClient.java:run(2593)) - DFSOutputStream ResponseProcessor exception  for 
block blk_6077790398764684064_1001java.io.IOException: Bad response ERROR for 
block blk_6077790398764684064_1001 from datanode 127.0.0.1:49054
    [junit]     at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2573)
    [junit] 
    [junit] 2009-08-10 15:21:37,860 WARN  hdfs.DFSClient 
(DFSClient.java:processDatanodeError(2622)) - Error Recovery for block 
blk_6077790398764684064_1001 bad datanode[1] 127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,860 WARN  hdfs.DFSClient 
(DFSClient.java:processDatanodeError(2666)) - Error Recovery for block 
blk_6077790398764684064_1001 in pipeline 127.0.0.1:38273, 127.0.0.1:49054, 
127.0.0.1:52882: bad datanode 127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,862 INFO  datanode.DataNode 
(DataNode.java:logRecoverBlock(1700)) - Client calls 
recoverBlock(block=blk_6077790398764684064_1001, targets=[127.0.0.1:38273, 
127.0.0.1:52882])
    [junit] 2009-08-10 15:21:37,865 INFO  datanode.DataNode 
(DataNode.java:updateBlock(1510)) - 
oldblock=blk_6077790398764684064_1001(length=1), 
newblock=blk_6077790398764684064_1002(length=0), datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,867 INFO  datanode.DataNode 
(DataNode.java:updateBlock(1510)) - 
oldblock=blk_6077790398764684064_1001(length=0), 
newblock=blk_6077790398764684064_1002(length=0), datanode=127.0.0.1:52882
    [junit] 2009-08-10 15:21:37,868 INFO  namenode.FSNamesystem 
(FSNamesystem.java:commitBlockSynchronization(1613)) - 
commitBlockSynchronization(lastblock=blk_6077790398764684064_1001, 
newgenerationstamp=1002, newlength=0, newtargets=[127.0.0.1:38273, 
127.0.0.1:52882], closeFile=false, deleteBlock=false)
    [junit] 2009-08-10 15:21:37,868 INFO  namenode.FSNamesystem 
(FSNamesystem.java:commitBlockSynchronization(1677)) - 
commitBlockSynchronization(blk_6077790398764684064_1002) successful
    [junit] 2009-08-10 15:21:37,870 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
 - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,870 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70))
 - FI: receiverOpWriteBlock
    [junit] 2009-08-10 15:21:37,870 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(222)) - Receiving block 
blk_6077790398764684064_1002 src: /127.0.0.1:35703 dest: /127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,870 INFO  datanode.DataNode 
(FSDataset.java:writeToBlock(1011)) - Reopen already-open Block for append 
blk_6077790398764684064_1002
    [junit] 2009-08-10 15:21:37,871 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
 - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:52882
    [junit] 2009-08-10 15:21:37,871 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70))
 - FI: receiverOpWriteBlock
    [junit] 2009-08-10 15:21:37,871 INFO  datanode.DataNode 
(DataXceiver.java:opWriteBlock(222)) - Receiving block 
blk_6077790398764684064_1002 src: /127.0.0.1:38925 dest: /127.0.0.1:52882
    [junit] 2009-08-10 15:21:37,871 INFO  datanode.DataNode 
(FSDataset.java:writeToBlock(1011)) - Reopen already-open Block for append 
blk_6077790398764684064_1002
    [junit] 2009-08-10 15:21:37,872 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60))
 - FI: statusRead SUCCESS, datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,873 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
 - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,873 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
 - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,873 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
 - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,873 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
 - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,873 INFO  datanode.BlockReceiverAspects 
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
 - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,874 INFO  DataNode.clienttrace 
(BlockReceiver.java:lastDataNodeRun(819)) - src: /127.0.0.1:38925, dest: 
/127.0.0.1:52882, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-1214791587, 
offset: 0, srvID: DS-593760917-67.195.138.9-52882-1249917696278, blockid: 
blk_6077790398764684064_1002, duration: 1809582
    [junit] 2009-08-10 15:21:37,875 INFO  hdfs.StateChange 
(BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: 
blockMap updated: 127.0.0.1:52882 is added to blk_6077790398764684064_1002 size 
1
    [junit] 2009-08-10 15:21:37,875 INFO  datanode.DataNode 
(BlockReceiver.java:lastDataNodeRun(843)) - PacketResponder 0 for block 
blk_6077790398764684064_1002 terminating
    [junit] 2009-08-10 15:21:37,876 INFO  DataNode.clienttrace 
(BlockReceiver.java:run(945)) - src: /127.0.0.1:35703, dest: /127.0.0.1:38273, 
bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-1214791587, offset: 0, srvID: 
DS-124577835-67.195.138.9-38273-1249917697677, blockid: 
blk_6077790398764684064_1002, duration: 2971944
    [junit] 2009-08-10 15:21:37,876 INFO  hdfs.StateChange 
(BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: 
blockMap updated: 127.0.0.1:38273 is added to blk_6077790398764684064_1002 size 
1
    [junit] 2009-08-10 15:21:37,877 INFO  datanode.DataNode 
(BlockReceiver.java:run(1009)) - PacketResponder 1 for block 
blk_6077790398764684064_1002 terminating
    [junit] 2009-08-10 15:21:37,878 INFO  hdfs.StateChange 
(FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: 
file /testPipelineFi15/foo is closed by DFSClient_-1214791587
    [junit] 2009-08-10 15:21:37,889 INFO  FSNamesystem.audit 
(FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1   
cmd=open        src=/testPipelineFi15/foo       dst=null        perm=null
    [junit] 2009-08-10 15:21:37,890 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
 - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:38273
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-10 15:21:37,891 INFO  DataNode.clienttrace 
(BlockSender.java:sendBlock(417)) - src: /127.0.0.1:38273, dest: 
/127.0.0.1:35705, bytes: 5, op: HDFS_READ, cliID: DFSClient_-1214791587, 
offset: 0, srvID: DS-124577835-67.195.138.9-38273-1249917697677, blockid: 
blk_6077790398764684064_1002, duration: 233320
    [junit] 2009-08-10 15:21:37,891 INFO  datanode.DataTransferProtocolAspects 
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60))
 - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,993 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 35997
    [junit] 2009-08-10 15:21:37,993 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 0 on 35997: exiting
    [junit] 2009-08-10 15:21:37,993 INFO  ipc.Server (Server.java:run(539)) - 
Stopping IPC Server Responder
    [junit] 2009-08-10 15:21:37,994 WARN  datanode.DataNode 
(DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:38273, 
storageID=DS-124577835-67.195.138.9-38273-1249917697677, infoPort=47864, 
ipcPort=35997):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit]     at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit]     at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit]     at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-10 15:21:37,994 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 1
    [junit] 2009-08-10 15:21:37,993 INFO  ipc.Server (Server.java:run(352)) - 
Stopping IPC Server listener on 35997
    [junit] 2009-08-10 15:21:38,043 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-10 15:21:38,043 INFO  datanode.DataNode 
(DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:38273, 
storageID=DS-124577835-67.195.138.9-38273-1249917697677, infoPort=47864, 
ipcPort=35997):Finishing DataNode in: 
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
 
    [junit] 2009-08-10 15:21:38,043 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 35997
    [junit] 2009-08-10 15:21:38,044 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-10 15:21:38,146 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 34492
    [junit] 2009-08-10 15:21:38,146 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 0 on 34492: exiting
    [junit] 2009-08-10 15:21:38,146 INFO  ipc.Server (Server.java:run(352)) - 
Stopping IPC Server listener on 34492
    [junit] 2009-08-10 15:21:38,147 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 1
    [junit] 2009-08-10 15:21:38,147 INFO  ipc.Server (Server.java:run(539)) - 
Stopping IPC Server Responder
    [junit] 2009-08-10 15:21:38,147 WARN  datanode.DataNode 
(DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:49054, 
storageID=DS-1109244666-67.195.138.9-49054-1249917697094, infoPort=46168, 
ipcPort=34492):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit]     at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit]     at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit]     at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-10 15:21:38,149 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 0
    [junit] 2009-08-10 15:21:38,150 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-10 15:21:38,150 INFO  datanode.DataNode 
(DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:49054, 
storageID=DS-1109244666-67.195.138.9-49054-1249917697094, infoPort=46168, 
ipcPort=34492):Finishing DataNode in: 
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
 
    [junit] 2009-08-10 15:21:38,150 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 34492
    [junit] 2009-08-10 15:21:38,150 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-10 15:21:38,252 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 41464
    [junit] 2009-08-10 15:21:38,253 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 0 on 41464: exiting
    [junit] 2009-08-10 15:21:38,253 INFO  ipc.Server (Server.java:run(352)) - 
Stopping IPC Server listener on 41464
    [junit] 2009-08-10 15:21:38,253 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 1
    [junit] 2009-08-10 15:21:38,253 INFO  ipc.Server (Server.java:run(539)) - 
Stopping IPC Server Responder
    [junit] 2009-08-10 15:21:38,253 WARN  datanode.DataNode 
(DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:52882, 
storageID=DS-593760917-67.195.138.9-52882-1249917696278, infoPort=35631, 
ipcPort=41464):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit]     at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit]     at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit]     at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit]     at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-10 15:21:38,255 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 0
    [junit] 2009-08-10 15:21:38,256 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-10 15:21:38,256 INFO  datanode.DataNode 
(DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:52882, 
storageID=DS-593760917-67.195.138.9-52882-1249917696278, infoPort=35631, 
ipcPort=41464):Finishing DataNode in: 
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
 
    [junit] 2009-08-10 15:21:38,256 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 41464
    [junit] 2009-08-10 15:21:38,256 INFO  datanode.DataNode 
(DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads 
is 0
    [junit] 2009-08-10 15:21:38,395 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(67)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-10 15:21:38,395 INFO  namenode.FSNamesystem 
(FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time 
for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of 
syncs: 2 SyncTimes(ms): 39 29 
    [junit] 2009-08-10 15:21:38,395 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(2077)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-10 15:21:38,403 INFO  ipc.Server (Server.java:stop(1103)) - 
Stopping server on 60444
    [junit] 2009-08-10 15:21:38,404 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 0 on 60444: exiting
    [junit] 2009-08-10 15:21:38,404 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 5 on 60444: exiting
    [junit] 2009-08-10 15:21:38,405 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 7 on 60444: exiting
    [junit] 2009-08-10 15:21:38,405 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 1 on 60444: exiting
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 44.091 sec
    [junit] 2009-08-10 15:21:38,404 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 4 on 60444: exiting
    [junit] 2009-08-10 15:21:38,404 INFO  ipc.Server (Server.java:run(352)) - 
Stopping IPC Server listener on 60444
    [junit] 2009-08-10 15:21:38,404 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 2 on 60444: exiting
    [junit] 2009-08-10 15:21:38,404 INFO  ipc.Server (Server.java:run(539)) - 
Stopping IPC Server Responder
    [junit] 2009-08-10 15:21:38,404 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 9 on 60444: exiting
    [junit] 2009-08-10 15:21:38,405 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 6 on 60444: exiting
    [junit] 2009-08-10 15:21:38,405 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 3 on 60444: exiting
    [junit] 2009-08-10 15:21:38,404 INFO  ipc.Server (Server.java:run(997)) - 
IPC Server handler 8 on 60444: exiting

checkfailure:

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build.xml:725: Tests failed!

Total time: 56 minutes 23 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...
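A note on the repeated "DataXceiveServer: java.nio.channels.AsynchronousCloseException" stack traces above: they are expected shutdown noise rather than the test failure itself. DataXceiverServer.run() blocks in ServerSocketChannel.accept(), and when DataNode shutdown closes that channel from another thread, the blocked accept() throws AsynchronousCloseException by design (per java.nio's interruptible-channel semantics). A minimal sketch of that interaction follows; the class name AcceptCloseDemo and the 200 ms sleep are illustrative only and not taken from Hadoop:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.AsynchronousCloseException;
import java.nio.channels.ServerSocketChannel;
import java.util.concurrent.atomic.AtomicBoolean;

public class AcceptCloseDemo {

    // Returns true if a blocked accept() failed with AsynchronousCloseException
    // after the channel was closed from another thread.
    static boolean demo() throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(0)); // ephemeral port, like the test DataNodes
        AtomicBoolean sawAsyncClose = new AtomicBoolean(false);

        Thread acceptor = new Thread(() -> {
            try {
                server.accept(); // blocks, like DataXceiverServer.run()
            } catch (AsynchronousCloseException e) {
                sawAsyncClose.set(true); // the expected shutdown-time outcome
            } catch (IOException e) {
                // any other I/O failure is left as "not seen"
            }
        });
        acceptor.start();
        Thread.sleep(200);   // give the acceptor time to block in accept()
        server.close();      // shutdown path: close the channel out from under accept()
        acceptor.join();
        return sawAsyncClose.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("sawAsyncClose=" + demo());
    }
}
```

Since these WARN lines appear in every clean shutdown, the actual failing assertion has to be found higher up in the truncated output, not in these traces.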
