See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/662/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 814531 lines...]
    [junit] 
    [junit] 2011-05-10 12:36:09,651 INFO datanode.DataNode (DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-05-10 12:36:09,651 WARN datanode.DataNode (DataNode.java:offerService(1065)) - BPOfferService for block pool=BP-1962236892-127.0.1.1-1305030968545 received exception:java.lang.InterruptedException
    [junit] 2011-05-10 12:36:09,651 WARN datanode.DataNode (DataNode.java:run(1218)) - DatanodeRegistration(127.0.0.1:51573, storageID=DS-907595183-127.0.1.1-51573-1305030969105, infoPort=48491, ipcPort=56648, storageInfo=lv=-35;cid=testClusterID;nsid=1972712457;c=0) ending block pool service for: BP-1962236892-127.0.1.1-1305030968545
    [junit] 2011-05-10 12:36:09,651 INFO datanode.DataBlockScanner (DataBlockScanner.java:removeBlockPool(277)) - Removed bpid=BP-1962236892-127.0.1.1-1305030968545 from blockPoolScannerMap
    [junit] 2011-05-10 12:36:09,652 INFO datanode.DataNode (FSDataset.java:shutdownBlockPool(2560)) - Removing block pool BP-1962236892-127.0.1.1-1305030968545
    [junit] 2011-05-10 12:36:09,652 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-05-10 12:36:09,652 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-05-10 12:36:09,652 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1041)) - Shutting down DataNode 0
    [junit] 2011-05-10 12:36:09,652 WARN datanode.DirectoryScanner (DirectoryScanner.java:shutdown(297)) - DirectoryScanner: shutdown has been called
    [junit] 2011-05-10 12:36:09,653 INFO datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:startNewPeriod(591)) - Starting a new period : work left in prev period : 100.00%
    [junit] 2011-05-10 12:36:09,753 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 53384
    [junit] 2011-05-10 12:36:09,754 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 53384: exiting
    [junit] 2011-05-10 12:36:09,754 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 53384
    [junit] 2011-05-10 12:36:09,754 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-05-10 12:36:09,754 WARN datanode.DataNode (DataXceiverServer.java:run(143)) - 127.0.0.1:52202:DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:136)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-05-10 12:36:09,755 INFO datanode.DataNode (DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-05-10 12:36:09,755 WARN datanode.DataNode (DataNode.java:offerService(1065)) - BPOfferService for block pool=BP-1962236892-127.0.1.1-1305030968545 received exception:java.lang.InterruptedException
    [junit] 2011-05-10 12:36:09,755 WARN datanode.DataNode (DataNode.java:run(1218)) - DatanodeRegistration(127.0.0.1:52202, storageID=DS-431346803-127.0.1.1-52202-1305030968976, infoPort=39361, ipcPort=53384, storageInfo=lv=-35;cid=testClusterID;nsid=1972712457;c=0) ending block pool service for: BP-1962236892-127.0.1.1-1305030968545
    [junit] 2011-05-10 12:36:09,855 INFO datanode.DataBlockScanner (DataBlockScanner.java:removeBlockPool(277)) - Removed bpid=BP-1962236892-127.0.1.1-1305030968545 from blockPoolScannerMap
    [junit] 2011-05-10 12:36:09,855 INFO datanode.DataNode (FSDataset.java:shutdownBlockPool(2560)) - Removing block pool BP-1962236892-127.0.1.1-1305030968545
    [junit] 2011-05-10 12:36:09,855 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-05-10 12:36:09,856 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-05-10 12:36:09,957 WARN namenode.FSNamesystem (FSNamesystem.java:run(3010)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-05-10 12:36:09,957 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-05-10 12:36:09,957 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 16 12 
    [junit] 2011-05-10 12:36:09,960 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 59295
    [junit] 2011-05-10 12:36:09,960 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 59295: exiting
    [junit] 2011-05-10 12:36:09,964 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 59295
    [junit] 2011-05-10 12:36:09,964 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 102.168 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:748: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:689: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:663: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: Tests failed!

Total time: 62 minutes 25 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION: org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_18

Error Message:
Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:39402], original=[127.0.0.1:39402]

Stack Trace:
java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:39402], original=[127.0.0.1:39402]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:824)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:918)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:731)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:415)


REGRESSION: org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_29

Error Message:
Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:59264], original=[127.0.0.1:59264]

Stack Trace:
java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:59264], original=[127.0.0.1:59264]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:824)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:918)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:731)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:415)


FAILED: org.apache.hadoop.hdfs.TestDFSShell.testErrOutPut

Error Message:
-cat returned -1

Stack Trace:
junit.framework.AssertionFailedError: -cat returned -1
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2yogg6zx8o(TestDFSShell.java:324)
	at org.apache.hadoop.hdfs.TestDFSShell.testErrOutPut(TestDFSShell.java:308)
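A note on the two pipeline_Fi_* regressions: the "Failed to add a datanode" IOException is thrown by the DFSClient's pipeline-recovery path (findNewDatanode / addDatanode2ExistingPipeline in the stack traces above), which expects the NameNode to supply a replacement datanode after a pipeline failure; on a mini-cluster too small to offer one, the check fails. As a rough illustration only (not a fix for this build), the client-side behavior is governed by the replace-datanode-on-failure settings in hdfs-site.xml; the values shown are the usual defaults for HDFS of this era:

```xml
<!-- Illustrative hdfs-site.xml fragment (client side). With the DEFAULT
     policy, a client writing a pipeline that loses a datanode asks for a
     replacement; if none can be found, the write fails with
     "Failed to add a datanode: nodes.length != original.length + 1". -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <!-- One of NEVER, DEFAULT, ALWAYS -->
  <value>DEFAULT</value>
</property>
```

In a production cluster these defaults are what you want; the failure mode above is mostly a symptom of fault-injection tests running against a cluster with no spare datanodes.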