See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/57/changes
Changes:

[ddas] HADOOP-6103. Updated common jar from rev 806430

------------------------------------------
[...truncated 226272 lines...]
    [junit] 2009-08-21 12:44:38,124 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 33129 webServer.getConnectors()[0].getLocalPort() returned 33129
    [junit] 2009-08-21 12:44:38,124 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 33129
    [junit] 2009-08-21 12:44:38,125 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-21 12:44:38,221 INFO mortbay.log (?:invoke(?)) - Started selectchannelconnec...@localhost:33129
    [junit] 2009-08-21 12:44:38,222 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-21 12:44:38,223 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=60219
    [junit] 2009-08-21 12:44:38,224 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-21 12:44:38,224 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:40199, storageID=, infoPort=33129, ipcPort=60219)
    [junit] 2009-08-21 12:44:38,224 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 60219: starting
    [junit] 2009-08-21 12:44:38,224 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 60219: starting
    [junit] 2009-08-21 12:44:38,226 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:40199 storage DS-357635502-67.195.138.9-40199-1250858678225
    [junit] 2009-08-21 12:44:38,226 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:40199
    [junit] 2009-08-21 12:44:38,278 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-357635502-67.195.138.9-40199-1250858678225 is assigned to data-node 127.0.0.1:40199
    [junit] 2009-08-21 12:44:38,278 INFO datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:40199, storageID=DS-357635502-67.195.138.9-40199-1250858678225, infoPort=33129, ipcPort=60219)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4
    [junit] 2009-08-21 12:44:38,279 INFO datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-21 12:44:38,287 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data3 is not formatted.
    [junit] 2009-08-21 12:44:38,288 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-21 12:44:38,307 INFO datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 0 msecs
    [junit] 2009-08-21 12:44:38,308 INFO datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-21 12:44:38,482 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data4 is not formatted.
    [junit] 2009-08-21 12:44:38,482 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-21 12:44:38,757 INFO datanode.DataNode (FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-21 12:44:38,757 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 51908
    [junit] 2009-08-21 12:44:38,758 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-21 12:44:38,758 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1250869067758 with interval 21600000
    [junit] 2009-08-21 12:44:38,759 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-21 12:44:38,760 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 55747 webServer.getConnectors()[0].getLocalPort() returned 55747
    [junit] 2009-08-21 12:44:38,760 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 55747
    [junit] 2009-08-21 12:44:38,760 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-21 12:44:38,828 INFO mortbay.log (?:invoke(?)) - Started selectchannelconnec...@localhost:55747
    [junit] 2009-08-21 12:44:38,829 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-21 12:44:38,830 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=36021
    [junit] 2009-08-21 12:44:38,830 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-21 12:44:38,831 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:51908, storageID=, infoPort=55747, ipcPort=36021)
    [junit] 2009-08-21 12:44:38,830 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 36021: starting
    [junit] 2009-08-21 12:44:38,830 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 36021: starting
    [junit] 2009-08-21 12:44:38,832 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:51908 storage DS-821477171-67.195.138.9-51908-1250858678832
    [junit] 2009-08-21 12:44:38,833 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:51908
    [junit] 2009-08-21 12:44:38,884 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-821477171-67.195.138.9-51908-1250858678832 is assigned to data-node 127.0.0.1:51908
    [junit] 2009-08-21 12:44:38,884 INFO datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:51908, storageID=DS-821477171-67.195.138.9-51908-1250858678832, infoPort=55747, ipcPort=36021)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] Starting DataNode 2 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
    [junit] 2009-08-21 12:44:38,885 INFO datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-21 12:44:38,894 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data5 is not formatted.
    [junit] 2009-08-21 12:44:38,894 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-21 12:44:38,921 INFO datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 0 msecs
    [junit] 2009-08-21 12:44:38,922 INFO datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-21 12:44:39,094 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data6 is not formatted.
    [junit] 2009-08-21 12:44:39,094 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-21 12:44:39,355 INFO datanode.DataNode (FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-21 12:44:39,355 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 46377
    [junit] 2009-08-21 12:44:39,356 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-21 12:44:39,356 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1250869158356 with interval 21600000
    [junit] 2009-08-21 12:44:39,357 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-21 12:44:39,358 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 54942 webServer.getConnectors()[0].getLocalPort() returned 54942
    [junit] 2009-08-21 12:44:39,358 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 54942
    [junit] 2009-08-21 12:44:39,358 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-21 12:44:39,430 INFO mortbay.log (?:invoke(?)) - Started selectchannelconnec...@localhost:54942
    [junit] 2009-08-21 12:44:39,431 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-21 12:44:39,432 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=47190
    [junit] 2009-08-21 12:44:39,432 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-21 12:44:39,433 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:46377, storageID=, infoPort=54942, ipcPort=47190)
    [junit] 2009-08-21 12:44:39,433 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 47190: starting
    [junit] 2009-08-21 12:44:39,433 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 47190: starting
    [junit] 2009-08-21 12:44:39,435 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:46377 storage DS-1055442649-67.195.138.9-46377-1250858679434
    [junit] 2009-08-21 12:44:39,435 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:46377
    [junit] 2009-08-21 12:44:39,478 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-1055442649-67.195.138.9-46377-1250858679434 is assigned to data-node 127.0.0.1:46377
    [junit] 2009-08-21 12:44:39,479 INFO datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:46377, storageID=DS-1055442649-67.195.138.9-46377-1250858679434, infoPort=54942, ipcPort=47190)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-21 12:44:39,479 INFO datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-21 12:44:39,552 INFO datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-21 12:44:39,553 INFO datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-21 12:44:39,700 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=create src=/pipeline_Fi_16/foo dst=null perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-21 12:44:39,736 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: /pipeline_Fi_16/foo. blk_1975120595995065171_1001
    [junit] 2009-08-21 12:44:39,737 INFO protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(35)) - FI: addBlock Pipeline[127.0.0.1:46377, 127.0.0.1:40199, 127.0.0.1:51908]
    [junit] 2009-08-21 12:44:39,738 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:46377
    [junit] 2009-08-21 12:44:39,738 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-21 12:44:39,739 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_1975120595995065171_1001 src: /127.0.0.1:36641 dest: /127.0.0.1:46377
    [junit] 2009-08-21 12:44:39,740 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:40199
    [junit] 2009-08-21 12:44:39,741 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-21 12:44:39,741 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_1975120595995065171_1001 src: /127.0.0.1:43022 dest: /127.0.0.1:40199
    [junit] 2009-08-21 12:44:39,743 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:51908
    [junit] 2009-08-21 12:44:39,743 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-21 12:44:39,743 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_1975120595995065171_1001 src: /127.0.0.1:51401 dest: /127.0.0.1:51908
    [junit] 2009-08-21 12:44:39,744 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:40199
    [junit] 2009-08-21 12:44:39,745 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:46377
    [junit] 2009-08-21 12:44:39,746 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,746 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,746 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,747 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,746 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,747 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,747 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,747 INFO fi.FiTestUtil (DataTransferTestUtil.java:run(151)) - FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:51908
    [junit] 2009-08-21 12:44:39,748 WARN datanode.DataNode (DataNode.java:checkDiskError(702)) - checkDiskError: exception:
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:51908
    [junit]     at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit]     at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit]     at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit]     at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit]     at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-21 12:44:39,748 INFO mortbay.log (?:invoke(?)) - Completed FSVolumeSet.checkDirs. Removed=0volumes. List of current volumes: http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current
    [junit] 2009-08-21 12:44:39,749 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(569)) - Exception in receiveBlock for block blk_1975120595995065171_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:51908
    [junit] 2009-08-21 12:44:39,749 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(782)) - PacketResponder 0 for block blk_1975120595995065171_1001 Interrupted.
    [junit] 2009-08-21 12:44:39,749 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_1975120595995065171_1001 terminating
    [junit] 2009-08-21 12:44:39,749 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_1975120595995065171_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:51908
    [junit] 2009-08-21 12:44:39,749 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:51908, storageID=DS-821477171-67.195.138.9-51908-1250858678832, infoPort=55747, ipcPort=36021):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:51908
    [junit]     at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit]     at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit]     at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit]     at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit]     at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-21 12:44:39,749 INFO datanode.DataNode (BlockReceiver.java:run(917)) - PacketResponder blk_1975120595995065171_1001 1 Exception java.io.EOFException
    [junit]     at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit]     at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:879)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-21 12:44:39,751 INFO datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_1975120595995065171_1001 terminating
    [junit] 2009-08-21 12:44:39,751 INFO datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 2 for block blk_1975120595995065171_1001 terminating
    [junit] 2009-08-21 12:44:39,752 WARN hdfs.DFSClient (DFSClient.java:run(2601)) - DFSOutputStream ResponseProcessor exception for block blk_1975120595995065171_1001java.io.IOException: Bad response ERROR for block blk_1975120595995065171_1001 from datanode 127.0.0.1:51908
    [junit]     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2581)
    [junit] 
    [junit] 2009-08-21 12:44:39,752 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2630)) - Error Recovery for block blk_1975120595995065171_1001 bad datanode[2] 127.0.0.1:51908
    [junit] 2009-08-21 12:44:39,752 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2674)) - Error Recovery for block blk_1975120595995065171_1001 in pipeline 127.0.0.1:46377, 127.0.0.1:40199, 127.0.0.1:51908: bad datanode 127.0.0.1:51908
    [junit] 2009-08-21 12:44:39,755 INFO datanode.DataNode (DataNode.java:logRecoverBlock(1727)) - Client calls recoverBlock(block=blk_1975120595995065171_1001, targets=[127.0.0.1:46377, 127.0.0.1:40199])
    [junit] 2009-08-21 12:44:39,758 INFO datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_1975120595995065171_1001(length=1), newblock=blk_1975120595995065171_1002(length=1), datanode=127.0.0.1:46377
    [junit] 2009-08-21 12:44:39,759 INFO datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_1975120595995065171_1001(length=1), newblock=blk_1975120595995065171_1002(length=1), datanode=127.0.0.1:40199
    [junit] 2009-08-21 12:44:39,760 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613)) - commitBlockSynchronization(lastblock=blk_1975120595995065171_1001, newgenerationstamp=1002, newlength=1, newtargets=[127.0.0.1:46377, 127.0.0.1:40199], closeFile=false, deleteBlock=false)
    [junit] 2009-08-21 12:44:39,760 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677)) - commitBlockSynchronization(blk_1975120595995065171_1002) successful
    [junit] 2009-08-21 12:44:39,761 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:46377
    [junit] 2009-08-21 12:44:39,762 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-21 12:44:39,762 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_1975120595995065171_1002 src: /127.0.0.1:36646 dest: /127.0.0.1:46377
    [junit] 2009-08-21 12:44:39,762 INFO datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_1975120595995065171_1002
    [junit] 2009-08-21 12:44:39,763 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:40199
    [junit] 2009-08-21 12:44:39,763 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-21 12:44:39,763 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_1975120595995065171_1002 src: /127.0.0.1:43027 dest: /127.0.0.1:40199
    [junit] 2009-08-21 12:44:39,763 INFO datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_1975120595995065171_1002
    [junit] 2009-08-21 12:44:39,764 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:46377
    [junit] 2009-08-21 12:44:39,764 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,765 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,765 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,765 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,765 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-21 12:44:39,766 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(822)) - src: /127.0.0.1:43027, dest: /127.0.0.1:40199, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-113320577, offset: 0, srvID: DS-357635502-67.195.138.9-40199-1250858678225, blockid: blk_1975120595995065171_1002, duration: 1945491
    [junit] 2009-08-21 12:44:39,766 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_1975120595995065171_1002 terminating
    [junit] 2009-08-21 12:44:39,767 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:40199 is added to blk_1975120595995065171_1002 size 1
    [junit] 2009-08-21 12:44:39,767 INFO DataNode.clienttrace (BlockReceiver.java:run(955)) - src: /127.0.0.1:36646, dest: /127.0.0.1:46377, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-113320577, offset: 0, srvID: DS-1055442649-67.195.138.9-46377-1250858679434, blockid: blk_1975120595995065171_1002, duration: 2597834
    [junit] 2009-08-21 12:44:39,767 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:46377 is added to blk_1975120595995065171_1002 size 1
    [junit] 2009-08-21 12:44:39,768 INFO datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_1975120595995065171_1002 terminating
    [junit] 2009-08-21 12:44:39,769 INFO hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: file /pipeline_Fi_16/foo is closed by DFSClient_-113320577
    [junit] 2009-08-21 12:44:39,782 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=open src=/pipeline_Fi_16/foo dst=null perm=null
    [junit] 2009-08-21 12:44:39,783 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:40199
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-21 12:44:39,784 INFO DataNode.clienttrace (BlockSender.java:sendBlock(417)) - src: /127.0.0.1:40199, dest: /127.0.0.1:43028, bytes: 5, op: HDFS_READ, cliID: DFSClient_-113320577, offset: 0, srvID: DS-357635502-67.195.138.9-40199-1250858678225, blockid: blk_1975120595995065171_1002, duration: 247202
    [junit] 2009-08-21 12:44:39,785 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:40199
    [junit] 2009-08-21 12:44:39,886 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 47190
    [junit] 2009-08-21 12:44:39,886 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 47190: exiting
    [junit] 2009-08-21 12:44:39,886 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 47190
    [junit] 2009-08-21 12:44:39,886 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-21 12:44:39,887 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:46377, storageID=DS-1055442649-67.195.138.9-46377-1250858679434, infoPort=54942, ipcPort=47190):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit]     at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit]     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit]     at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-21 12:44:39,886 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-21 12:44:39,887 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-21 12:44:39,888 INFO datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:46377, storageID=DS-1055442649-67.195.138.9-46377-1250858679434, infoPort=54942, ipcPort=47190):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-21 12:44:39,888 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 47190
    [junit] 2009-08-21 12:44:39,888 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-21 12:44:40,012 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 36021
    [junit] 2009-08-21 12:44:40,013 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 36021
    [junit] 2009-08-21 12:44:40,013 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-21 12:44:40,013 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 36021: exiting
    [junit] 2009-08-21 12:44:40,013 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:51908, storageID=DS-821477171-67.195.138.9-51908-1250858678832, infoPort=55747, ipcPort=36021):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit]     at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit]     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit]     at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-21 12:44:40,013 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-21 12:44:40,015 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-21 12:44:40,016 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-21 12:44:40,016 INFO datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:51908, storageID=DS-821477171-67.195.138.9-51908-1250858678832, infoPort=55747, ipcPort=36021):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] 2009-08-21 12:44:40,016 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 36021
    [junit] 2009-08-21 12:44:40,016 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-21 12:44:40,118 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 60219
    [junit] 2009-08-21 12:44:40,119 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 60219
    [junit] 2009-08-21 12:44:40,119 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-21 12:44:40,119 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-21 12:44:40,119 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:40199, storageID=DS-357635502-67.195.138.9-40199-1250858678225, infoPort=33129, ipcPort=60219):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit]     at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit]     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit]     at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit]     at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit]     at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-21 12:44:40,120 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 60219: exiting
    [junit] 2009-08-21 12:44:40,122 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-21 12:44:40,123 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-21 12:44:40,123 INFO datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:40199, storageID=DS-357635502-67.195.138.9-40199-1250858678225, infoPort=33129, ipcPort=60219):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
    [junit] 2009-08-21 12:44:40,124 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 60219
    [junit] 2009-08-21 12:44:40,124 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-21 12:44:40,264 WARN namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-21 12:44:40,264 INFO namenode.FSNamesystem (FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 50 35
    [junit] 2009-08-21 12:44:40,264 WARN namenode.FSNamesystem (FSNamesystem.java:run(2077)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-21 12:44:40,271 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 42222
    [junit] 2009-08-21 12:44:40,272 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 42222: exiting
    [junit] 2009-08-21 12:44:40,272 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 1 on 42222: exiting
    [junit] 2009-08-21 12:44:40,272 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 2 on 42222: exiting
    [junit] 2009-08-21 12:44:40,272 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 4 on 42222: exiting
    [junit] 2009-08-21 12:44:40,272 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 3 on 42222: exiting
    [junit] 2009-08-21 12:44:40,272 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 6 on 42222: exiting
    [junit] 2009-08-21 12:44:40,272 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 42222
    [junit] 2009-08-21 12:44:40,272 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 5 on 42222: exiting
    [junit] 2009-08-21 12:44:40,273 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 9 on 42222: exiting
    [junit] 2009-08-21 12:44:40,273 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 7 on 42222: exiting
    [junit] 2009-08-21 12:44:40,273 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 8 on 42222: exiting
    [junit] 2009-08-21 12:44:40,274 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 74.272 sec

checkfailure:

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build.xml:725: Tests failed!

Total time: 70 minutes 37 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...