[jira] Created: (HDFS-1549) ArrayIndexOutOfBoundsException thrown from BlockLocation

2010-12-21 Thread Min Zhou (JIRA)
ArrayIndexOutOfBoundsException thrown from BlockLocation
--

 Key: HDFS-1549
 URL: https://issues.apache.org/jira/browse/HDFS-1549
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Min Zhou
 Fix For: 0.22.0


A BlockLocation object created through the default constructor has a hosts array 
of length zero. Reading fields from a DataInput will therefore throw an 
ArrayIndexOutOfBoundsException unless the array is first resized to the 
serialized length.

Exception in thread "IPC Client (47) connection to nn151/192.168.201.151:9020 from zhoumin" java.lang.ArrayIndexOutOfBoundsException: 0
at org.apache.hadoop.fs.BlockLocation.readFields(BlockLocation.java:177)
at org.apache.hadoop.fs.LocatedFileStatus.readFields(LocatedFileStatus.java:85)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:237)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:171)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:219)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:509)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:439)
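
A minimal sketch of the shape of fix this implies, assuming a Writable-style 
layout in which the host count precedes the host strings (the hosts field comes 
from the report; the class wrapper and stream layout are illustrative, not the 
actual BlockLocation serialization):

{noformat}
import java.io.DataInput;
import java.io.IOException;
import org.apache.hadoop.io.Text;

class BlockLocationSketch {
  private String[] hosts = new String[0]; // default constructor leaves length 0

  public void readFields(DataInput in) throws IOException {
    int numHosts = in.readInt();
    // Resize before filling: writing into the zero-length array left by the
    // default constructor is what raises ArrayIndexOutOfBoundsException: 0.
    hosts = new String[numHosts];
    for (int i = 0; i < numHosts; i++) {
      Text host = new Text();
      host.readFields(in);
      hosts[i] = host.toString();
    }
  }
}
{noformat}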

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Hadoop-Hdfs-21-Build - Build # 137 - Still Failing

2010-12-21 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-21-Build/137/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 3011 lines...]
[junit]  jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit]  and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Running org.apache.hadoop.fs.TestFiListPath
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 1.867 sec
[junit] Running org.apache.hadoop.fs.TestFiRename
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 5.026 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHFlush
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 16.333 sec
[junit] Running org.apache.hadoop.hdfs.TestFiPipelines
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.649 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
[junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 217.668 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
[junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 429.203 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.797 sec

checkfailure:

run-test-hdfs-excluding-commit-and-smoke:
   [delete] Deleting directory 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build-fi/test/data
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build-fi/test/data
   [delete] Deleting directory 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build-fi/test/logs
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build-fi/test/logs
[junit] WARNING: multiple versions of ant detected in path for junit 
[junit]  jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit]  and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Running org.apache.hadoop.fs.TestFiListPath
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 1.956 sec
[junit] Running org.apache.hadoop.fs.TestFiRename
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 5.15 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHFlush
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 16.008 sec
[junit] Running org.apache.hadoop.hdfs.TestFiPipelines
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.412 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
[junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 217.321 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
[junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 412.547 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.869 sec

checkfailure:

run-test-hdfs-all-withtestcaseonly:

run-test-hdfs:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build.xml:707: 
Tests failed!

Total time: 88 minutes 22 seconds
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Description set: 
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09

Error Message:
Was waiting too long for a replica to become TEMPORARY

Stack Trace:
junit.framework.AssertionFailedError: Was waiting too long for a replica to become TEMPORARY
at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.waitForTempReplica(TestBlockReport.java:483)
at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:430)





Hadoop-Hdfs-22-branch - Build # 4 - Still Failing

2010-12-21 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-22-branch/4/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 3367 lines...]
[junit] Running org.apache.hadoop.hdfs.TestFiHFlush
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 18.286 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHftp
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 37.164 sec
[junit] Running org.apache.hadoop.hdfs.TestFiPipelines
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.745 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
[junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 219.553 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
[junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 420.436 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 36.167 sec

checkfailure:

run-test-hdfs-excluding-commit-and-smoke:
   [delete] Deleting directory 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
   [delete] Deleting directory 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
[junit] WARNING: multiple versions of ant detected in path for junit 
[junit]  jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit]  and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Running org.apache.hadoop.fs.TestFiListPath
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.898 sec
[junit] Running org.apache.hadoop.fs.TestFiRename
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 5.862 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHFlush
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 19.342 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHftp
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 42.302 sec
[junit] Running org.apache.hadoop.hdfs.TestFiPipelines
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.112 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
[junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 220.365 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
[junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 404.397 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 36.358 sec

checkfailure:

run-test-hdfs-all-withtestcaseonly:

run-test-hdfs:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:725:
 Tests failed!

Total time: 103 minutes 47 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
Too many open files

Stack Trace:
java.io.IOException: Too many open files
at sun.nio.ch.IOUtil.initPipe(Native Method)
at sun.nio.ch.EPollSelectorImpl.&lt;init&gt;(EPollSelectorImpl.java:49)
at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
at java.nio.channels.Selector.open(Selector.java:209)
at org.apache.hadoop.ipc.Server$Listener.&lt;init&gt;(Server.java:318)
at org.apache.hadoop.ipc.Server.&lt;init&gt;(Server.java:1492)
at org.apache.hadoop.ipc.RPC$Server.&lt;init&gt;(RPC.java:394)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.&lt;init&gt;(WritableRpcEngine.java:331)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:291)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:47)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:382)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:416)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:507)
at 
org.apache.hadoop.hdfs

Hadoop-Hdfs-trunk - Build # 528 - Still Failing

2010-12-21 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/528/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 841373 lines...]
[junit] 2010-12-21 13:23:19,658 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2010-12-21 13:23:19,658 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 13:23:19,760 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 35416
[junit] 2010-12-21 13:23:19,760 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 0 on 35416: exiting
[junit] 2010-12-21 13:23:19,760 INFO  ipc.Server (Server.java:run(675)) - 
Stopping IPC Server Responder
[junit] 2010-12-21 13:23:19,760 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 1
[junit] 2010-12-21 13:23:19,760 INFO  ipc.Server (Server.java:run(475)) - 
Stopping IPC Server listener on 35416
[junit] 2010-12-21 13:23:19,760 WARN  datanode.DataNode 
(DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:50831, 
storageID=DS-1946037341-127.0.1.1-50831-1292937788838, infoPort=36930, 
ipcPort=35416):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 
[junit] 2010-12-21 13:23:19,763 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2010-12-21 13:23:19,863 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 13:23:19,863 INFO  datanode.DataNode 
(DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:50831, 
storageID=DS-1946037341-127.0.1.1-50831-1292937788838, infoPort=36930, 
ipcPort=35416):Finishing DataNode in: 
FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 13:23:19,864 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 35416
[junit] 2010-12-21 13:23:19,864 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2010-12-21 13:23:19,864 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2010-12-21 13:23:19,864 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2010-12-21 13:23:19,865 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2010-12-21 13:23:19,969 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(70)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 13:23:19,969 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(631)) - Number of transactions: 6 Total time 
for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of 
syncs: 3 SyncTimes(ms): 7 3 
[junit] 2010-12-21 13:23:19,969 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(2822)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 13:23:19,970 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 34872
[junit] 2010-12-21 13:23:19,971 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 0 on 34872: exiting
[junit] 2010-12-21 13:23:19,971 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 6 on 34872: exiting
[junit] 2010-12-21 13:23:19,971 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 7 on 34872: exiting
[junit] 2010-12-21 13:23:19,971 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 2 on 34872: exiting
[junit] 2010-12-21 13:23:19,971 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 9 on 34872: exiting
[junit] 2010-12-21 13:23:19,971 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 5 on 34872: exiting
[junit] 2010-12-21 13:23:19,971 INFO  ipc.Server (Server.java:run(1

Hadoop-Hdfs-trunk-Commit - Build # 496 - Still Failing

2010-12-21 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/496/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 143613 lines...]
[junit] 2010-12-21 19:03:38,419 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2010-12-21 19:03:38,419 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 19:03:38,520 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 38397
[junit] 2010-12-21 19:03:38,521 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 0 on 38397: exiting
[junit] 2010-12-21 19:03:38,521 INFO  ipc.Server (Server.java:run(475)) - 
Stopping IPC Server listener on 38397
[junit] 2010-12-21 19:03:38,521 INFO  ipc.Server (Server.java:run(675)) - 
Stopping IPC Server Responder
[junit] 2010-12-21 19:03:38,521 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 1 on 38397: exiting
[junit] 2010-12-21 19:03:38,521 WARN  datanode.DataNode 
(DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:34688, 
storageID=DS-629189686-127.0.1.1-34688-1292958217657, infoPort=58302, 
ipcPort=38397):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 
[junit] 2010-12-21 19:03:38,521 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 1
[junit] 2010-12-21 19:03:38,521 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 2 on 38397: exiting
[junit] 2010-12-21 19:03:38,522 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 19:03:38,522 INFO  datanode.DataNode 
(DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:34688, 
storageID=DS-629189686-127.0.1.1-34688-1292958217657, infoPort=58302, 
ipcPort=38397):Finishing DataNode in: 
FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 19:03:38,522 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 38397
[junit] 2010-12-21 19:03:38,522 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2010-12-21 19:03:38,523 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2010-12-21 19:03:38,523 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2010-12-21 19:03:38,523 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2010-12-21 19:03:38,625 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(2822)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:03:38,625 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(70)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:03:38,625 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time 
for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of 
syncs: 9 SyncTimes(ms): 10 6 
[junit] 2010-12-21 19:03:38,626 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 42152
[junit] 2010-12-21 19:03:38,627 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 0 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 2 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 8 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 9 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO  ipc.Server (Server.java:run(475)) - 
Stopping IPC Server listener on 42152
[junit] 2010-12-21 19:03:38,627 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Se

[jira] Created: (HDFS-1550) NPE when listing a file with no location

2010-12-21 Thread Hairong Kuang (JIRA)
NPE when listing a file with no location


 Key: HDFS-1550
 URL: https://issues.apache.org/jira/browse/HDFS-1550
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.22.0
Reporter: Hairong Kuang
Assignee: Hairong Kuang
Priority: Blocker
 Fix For: 0.22.0


The lines listed below cause a NullPointerException in 
DFSUtil.locatedBlocks2Locations (line 208) because EMPTY_BLOCK_LOCS returns 
null from blocks.getLocatedBlocks():
{noformat}
   /** a default LocatedBlocks object, its content should not be changed */
   private final static LocatedBlocks EMPTY_BLOCK_LOCS = new LocatedBlocks();
{noformat}
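
A minimal sketch of the kind of guard this calls for (the method and accessor 
names come from the report; the conversion loop around them is illustrative, 
not the actual DFSUtil source):

{noformat}
import java.util.List;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

class DfsUtilSketch {
  static BlockLocation[] locatedBlocks2Locations(LocatedBlocks blocks) {
    if (blocks == null) {
      return new BlockLocation[0];
    }
    List<LocatedBlock> blkList = blocks.getLocatedBlocks();
    if (blkList == null || blkList.isEmpty()) {
      // EMPTY_BLOCK_LOCS lands here: its getLocatedBlocks() is null.
      return new BlockLocation[0];
    }
    BlockLocation[] locations = new BlockLocation[blkList.size()];
    for (int i = 0; i < blkList.size(); i++) {
      locations[i] = new BlockLocation(); // host/offset extraction omitted
    }
    return locations;
  }
}
{noformat}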

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Hadoop-Hdfs-trunk-Commit - Build # 497 - Still Failing

2010-12-21 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/497/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 139268 lines...]
[junit] 2010-12-21 19:25:21,468 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2010-12-21 19:25:21,468 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 19:25:21,569 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 38446
[junit] 2010-12-21 19:25:21,570 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 1 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 0 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 2 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO  ipc.Server (Server.java:run(475)) - 
Stopping IPC Server listener on 38446
[junit] 2010-12-21 19:25:21,571 INFO  ipc.Server (Server.java:run(675)) - 
Stopping IPC Server Responder
[junit] 2010-12-21 19:25:21,571 WARN  datanode.DataNode 
(DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:36691, 
storageID=DS-1364120862-127.0.1.1-36691-1292959520595, infoPort=59825, 
ipcPort=38446):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 
[junit] 2010-12-21 19:25:21,571 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2010-12-21 19:25:21,572 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 19:25:21,572 INFO  datanode.DataNode 
(DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:36691, 
storageID=DS-1364120862-127.0.1.1-36691-1292959520595, infoPort=59825, 
ipcPort=38446):Finishing DataNode in: 
FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 19:25:21,572 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 38446
[junit] 2010-12-21 19:25:21,572 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2010-12-21 19:25:21,572 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2010-12-21 19:25:21,573 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2010-12-21 19:25:21,573 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2010-12-21 19:25:21,674 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(2822)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:25:21,674 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(70)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:25:21,675 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time 
for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of 
syncs: 9 SyncTimes(ms): 11 3 
[junit] 2010-12-21 19:25:21,676 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 46353
[junit] 2010-12-21 19:25:21,677 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 1 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 0 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 2 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 7 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO  ipc.Server (Server.java:run(475)) - 
Stopping IPC Server listener on 46353
[junit] 2010-12-21 19:25:21,677 INFO  ipc.Server (Server.java:run(675)) - 
Stopp

[jira] Created: (HDFS-1551) fix the pom template's version

2010-12-21 Thread Giridharan Kesavan (JIRA)
fix the pom template's version
--

 Key: HDFS-1551
 URL: https://issues.apache.org/jira/browse/HDFS-1551
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


pom templates in the ivy folder should be updated to the latest versions of 
the hadoop-common dependencies.
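
For illustration, the kind of stanza such a template pins (a hypothetical 
snippet: the coordinates are the usual org.apache.hadoop/hadoop-common pair, 
and the version value is only a placeholder for whatever snapshot is current):

{noformat}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>0.23.0-SNAPSHOT</version> <!-- placeholder; keep in sync -->
</dependency>
{noformat}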

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HDFS-1551) fix the pom template's version

2010-12-21 Thread Giridharan Kesavan (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Giridharan Kesavan resolved HDFS-1551.
--

   Resolution: Fixed
Fix Version/s: 0.23.0

Thanks Nigel.

> fix the pom template's version
> --
>
> Key: HDFS-1551
> URL: https://issues.apache.org/jira/browse/HDFS-1551
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
> Fix For: 0.23.0
>
> Attachments: hdfs-1551.patch
>
>
> pom templates in the ivy folder should be updated to the latest versions of 
> the hadoop-common dependencies.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Hadoop-Hdfs-trunk-Commit - Build # 498 - Still Failing

2010-12-21 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/498/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 145407 lines...]
[junit] 2010-12-21 21:02:49,548 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 21:02:49,658 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 47329
[junit] 2010-12-21 21:02:49,658 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 0 on 47329: exiting
[junit] 2010-12-21 21:02:49,658 INFO  ipc.Server (Server.java:run(475)) - 
Stopping IPC Server listener on 47329
[junit] 2010-12-21 21:02:49,659 INFO  ipc.Server (Server.java:run(675)) - 
Stopping IPC Server Responder
[junit] 2010-12-21 21:02:49,659 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 2 on 47329: exiting
[junit] 2010-12-21 21:02:49,658 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 1
[junit] 2010-12-21 21:02:49,659 WARN  datanode.DataNode 
(DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:45847, 
storageID=DS-1587999582-127.0.1.1-45847-1292965368602, infoPort=45437, 
ipcPort=47329):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 
[junit] 2010-12-21 21:02:49,659 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 1 on 47329: exiting
[junit] 2010-12-21 21:02:49,661 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2010-12-21 21:02:49,662 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 21:02:49,662 INFO  datanode.DataNode 
(DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:45847, 
storageID=DS-1587999582-127.0.1.1-45847-1292965368602, infoPort=45437, 
ipcPort=47329):Finishing DataNode in: 
FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 21:02:49,662 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 47329
[junit] 2010-12-21 21:02:49,663 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2010-12-21 21:02:49,663 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2010-12-21 21:02:49,663 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2010-12-21 21:02:49,663 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2010-12-21 21:02:49,765 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(2822)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 21:02:49,765 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(70)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 21:02:49,765 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time 
for transactions(ms): 1Number of transactions batched in Syncs: 1 Number of 
syncs: 9 SyncTimes(ms): 7 5 
[junit] 2010-12-21 21:02:49,766 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 33168
[junit] 2010-12-21 21:02:49,767 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 0 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 2 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO  ipc.Server (Server.java:run(675)) - 
Stopping IPC Server Responder
[junit] 2010-12-21 21:02:49,767 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 3 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 4 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 5 on 33168: exi

Hadoop-Hdfs-21-Build - Build # 138 - Still Failing

2010-12-21 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-21-Build/138/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 909 lines...]
ivy-probe-antlib:

ivy-init-antlib:

ivy-init:
[ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file = 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/ivy/ivysettings.xml

ivy-resolve-common:
[ivy:resolve] downloading 
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0-SNAPSHOT/hadoop-common-0.21.0-20101120.093342-38.jar
 ...
[ivy:resolve] 
..
 (1259kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-common;0.21.0-SNAPSHOT!hadoop-common.jar (628ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/avro/1.3.2/avro-1.3.2.jar ...
[ivy:resolve] 

 (331kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] org.apache.hadoop#avro;1.3.2!avro.jar (1493ms)

ivy-retrieve-common:
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 
'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/ivy/ivysettings.xml

init:
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/classes
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/src
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/webapps/hdfs/WEB-INF
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/webapps/datanode/WEB-INF
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/webapps/secondary/WEB-INF
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/ant
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/c++
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/test
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/test/hdfs/classes
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/test/extraconf
[touch] Creating /tmp/null1033961134
   [delete] Deleting: /tmp/null1033961134
 [copy] Copying 2 files to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/webapps
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/conf
 [copy] Copying 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/conf/hdfs-site.xml.template
 to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/conf/hdfs-site.xml

compile-hdfs-classes:
[javac] Compiling 197 source files to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/classes
[javac] 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:32:
 cannot access org.apache.hadoop.conf.Configuration
[javac] bad class file: 
org/apache/hadoop/conf/Configuration.class(org/apache/hadoop/conf:Configuration.class)
[javac] unable to access file: 
/homes/hudson/.ivy2/cache/org.apache.hadoop/hadoop-common/jars/hadoop-common-0.21.0-SNAPSHOT.jar
 (No such file or directory)
[javac] Please remove or make sure it appears in the correct subdirectory 
of the classpath.
[javac] import org.apache.hadoop.conf.Configuration;
[javac]  ^

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build.xml:320: 
Compile failed; see the compiler error output for details.

Total time: 12 seconds
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Description set: 
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


Hadoop-Hdfs-trunk-Commit - Build # 499 - Still Failing

2010-12-21 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/499/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 1473 lines...]
[javac] 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java:33:
 package InterfaceStability does not exist
[javac] @InterfaceStability.Evolving
[javac]^
[javac] 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:147:
 cannot find symbol
[javac] symbol  : class Path
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac]   HdfsLocatedFileStatus f, Path parent) {
[javac]^
[javac] 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:146:
 cannot find symbol
[javac] symbol  : class LocatedFileStatus
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac]   private LocatedFileStatus makeQualifiedLocated(
[javac]   ^
[javac] 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:159:
 cannot find symbol
[javac] symbol  : class FsStatus
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac]   public FsStatus getFsStatus() throws IOException {
[javac]  ^
[javac] 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:164:
 cannot find symbol
[javac] symbol  : class FsServerDefaults
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac]   public FsServerDefaults getServerDefaults() throws IOException {
[javac]  ^
[javac] 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:170:
 cannot find symbol
[javac] symbol  : class Path
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac]   final Path p)
[javac] ^
[javac] Note: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
 uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 100 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:335:
 Compile failed; see the compiler error output for details.

Total time: 13 seconds


==
==
STORE: saving artifacts
==
==


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


Hadoop-Hdfs-trunk-Commit - Build # 500 - Still Failing

2010-12-21 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/500/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 148064 lines...]
[junit] 2010-12-22 04:48:21,922 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2010-12-22 04:48:21,922 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-22 04:48:22,025 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 37929
[junit] 2010-12-22 04:48:22,026 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 0 on 37929: exiting
[junit] 2010-12-22 04:48:22,026 WARN  datanode.DataNode 
(DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:41067, 
storageID=DS-911990291-127.0.1.1-41067-1292993301009, infoPort=33105, 
ipcPort=37929):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 
[junit] 2010-12-22 04:48:22,026 INFO  ipc.Server (Server.java:run(675)) - 
Stopping IPC Server Responder
[junit] 2010-12-22 04:48:22,026 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 1 on 37929: exiting
[junit] 2010-12-22 04:48:22,026 INFO  ipc.Server (Server.java:run(475)) - 
Stopping IPC Server listener on 37929
[junit] 2010-12-22 04:48:22,026 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2010-12-22 04:48:22,026 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 2 on 37929: exiting
[junit] 2010-12-22 04:48:22,027 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-22 04:48:22,027 INFO  datanode.DataNode 
(DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:41067, 
storageID=DS-911990291-127.0.1.1-41067-1292993301009, infoPort=33105, 
ipcPort=37929):Finishing DataNode in: 
FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-22 04:48:22,027 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 37929
[junit] 2010-12-22 04:48:22,027 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2010-12-22 04:48:22,028 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2010-12-22 04:48:22,028 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2010-12-22 04:48:22,028 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2010-12-22 04:48:22,130 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(70)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-22 04:48:22,130 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time 
for transactions(ms): 2Number of transactions batched in Syncs: 1 Number of 
syncs: 9 SyncTimes(ms): 6 7 
[junit] 2010-12-22 04:48:22,130 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(2822)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-22 04:48:22,131 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 40184
[junit] 2010-12-22 04:48:22,132 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 0 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 1 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO  ipc.Server (Server.java:run(475)) - 
Stopping IPC Server listener on 40184
[junit] 2010-12-22 04:48:22,132 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 9 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 8 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Ser

Hadoop-Hdfs-22-branch - Build # 5 - Still Failing

2010-12-21 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-22-branch/5/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 3369 lines...]
[junit] Running org.apache.hadoop.hdfs.TestFiHFlush
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 18.358 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHftp
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 41.517 sec
[junit] Running org.apache.hadoop.hdfs.TestFiPipelines
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.936 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
[junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 220.64 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
[junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 439.793 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.946 sec

checkfailure:

run-test-hdfs-excluding-commit-and-smoke:
   [delete] Deleting directory 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
   [delete] Deleting directory 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
[junit] WARNING: multiple versions of ant detected in path for junit 
[junit]  jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit]  and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Running org.apache.hadoop.fs.TestFiListPath
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.952 sec
[junit] Running org.apache.hadoop.fs.TestFiRename
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 6.082 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHFlush
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 18.589 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHftp
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 44.927 sec
[junit] Running org.apache.hadoop.hdfs.TestFiPipelines
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.904 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
[junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 220.621 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
[junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 414.015 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 36.207 sec

checkfailure:

run-test-hdfs-all-withtestcaseonly:

run-test-hdfs:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:725:
 Tests failed!

Total time: 104 minutes 47 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
4 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestHDFSTrash.testTrashEmptier

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:473)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)


FAILED:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
Too many open files

Stack Trace:
java.io.IOException: Too many open files
at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
at sun.nio.ch.EPollArrayWrapper.&lt;init&gt;(EPollArrayWrapper.java:68)
at sun.nio.ch.EPollSelectorImpl.&lt;init&gt;(EPollSelectorImpl.java:52)
at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
at java.nio.channels.Selector.open(Selector.java:209)
at org.apache.hadoop.ipc.Server$Responder.&lt;init&gt;(Server.java:602)
at org.apache.hadoop.ipc.Server.&lt;init&gt;(Server.java:1501)
at org.apache.hadoop.ipc.RPC$Server.&lt;init&gt;(RPC.java:394)
at 
org.apache.hadoop.ipc.Writ