Hadoop-Hdfs-trunk - Build # 584 - Still Failing

2011-02-16 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/584/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 694186 lines...]
[junit] 2011-02-16 12:33:33,046 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2011-02-16 12:33:33,046 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
[junit] 2011-02-16 12:33:33,048 INFO  ipc.Server (Server.java:stop(1622)) - 
Stopping server on 45109
[junit] 2011-02-16 12:33:33,048 INFO  ipc.Server (Server.java:run(1455)) - 
IPC Server handler 0 on 45109: exiting
[junit] 2011-02-16 12:33:33,049 INFO  ipc.Server (Server.java:run(485)) - 
Stopping IPC Server listener on 45109
[junit] 2011-02-16 12:33:33,049 INFO  datanode.DataNode 
(DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads 
is 1
[junit] 2011-02-16 12:33:33,049 WARN  datanode.DataNode 
(DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:40341, 
storageID=DS-622966542-127.0.1.1-40341-1297859602463, infoPort=55796, 
ipcPort=45109):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
[junit] at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit] 
[junit] 2011-02-16 12:33:33,049 INFO  ipc.Server (Server.java:run(687)) - 
Stopping IPC Server Responder
[junit] 2011-02-16 12:33:33,051 INFO  datanode.DataNode 
(DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2011-02-16 12:33:33,152 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2011-02-16 12:33:33,152 INFO  datanode.DataNode 
(DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:40341, 
storageID=DS-622966542-127.0.1.1-40341-1297859602463, infoPort=55796, 
ipcPort=45109):Finishing DataNode in: 
FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
[junit] 2011-02-16 12:33:33,152 INFO  ipc.Server (Server.java:stop(1622)) - 
Stopping server on 45109
[junit] 2011-02-16 12:33:33,152 INFO  datanode.DataNode 
(DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2011-02-16 12:33:33,153 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2011-02-16 12:33:33,153 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2011-02-16 12:33:33,153 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2011-02-16 12:33:33,255 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(2847)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2011-02-16 12:33:33,255 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(70)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
[junit] 2011-02-16 12:33:33,256 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time 
for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of 
syncs: 3 SyncTimes(ms): 8 4 
[junit] 2011-02-16 12:33:33,257 INFO  ipc.Server (Server.java:stop(1622)) - 
Stopping server on 39710
[junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(1455)) - 
IPC Server handler 0 on 39710: exiting
[junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(1455)) - 
IPC Server handler 4 on 39710: exiting
[junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(1455)) - 
IPC Server handler 3 on 39710: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.347 sec
[junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(687)) - 
Stopping IPC Server Responder
[junit] 2011-02-16 12:33:33,269 INFO  ipc.Server (Server.java:run(1455)) - 
IPC Server handler 7 on 39710: exiting
[junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(1455)) - 
IPC Server handler 9 on 39710: exiting
[jun

[jira] Created: (HDFS-1630) Checksum fsedits

2011-02-16 Thread Hairong Kuang (JIRA)
Checksum fsedits


 Key: HDFS-1630
 URL: https://issues.apache.org/jira/browse/HDFS-1630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Hairong Kuang
Assignee: Hairong Kuang


HDFS-903 adds an MD5 checksum to a saved image, so that we can verify the 
integrity of the image at load time.

The other half of the story is how to verify fsedits. Similarly, we could use 
the checksum approach, but since an fsedit file grows constantly, a single 
checksum per file does not work. I am thinking of adding a checksum per 
transaction. Is that doable, or too expensive?
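A per-transaction checksum could be as simple as a CRC32 trailer appended to each serialized record. The sketch below is only an illustration of that idea, not the actual FSEditLog serialization; the record layout and names are hypothetical:

```java
import java.util.Arrays;
import java.util.zip.CRC32;

// Illustrative sketch: append a CRC32 trailer to each serialized edit-log
// transaction, and verify it when the record is read back. The record
// layout here is hypothetical, not the real FSEditLog format.
public class EditLogChecksum {

    // Append an 8-byte big-endian CRC32 of the transaction bytes.
    static byte[] withChecksum(byte[] txn) {
        CRC32 crc = new CRC32();
        crc.update(txn, 0, txn.length);
        long sum = crc.getValue();
        byte[] record = Arrays.copyOf(txn, txn.length + 8);
        for (int i = 0; i < 8; i++) {
            record[txn.length + i] = (byte) (sum >>> (56 - 8 * i));
        }
        return record;
    }

    // Recompute the checksum over everything but the 8-byte trailer
    // and compare it against the stored value.
    static boolean verify(byte[] record) {
        if (record.length < 8) {
            return false;
        }
        int bodyLen = record.length - 8;
        CRC32 crc = new CRC32();
        crc.update(record, 0, bodyLen);
        long stored = 0;
        for (int i = 0; i < 8; i++) {
            stored = (stored << 8) | (record[bodyLen + i] & 0xFFL);
        }
        return stored == crc.getValue();
    }
}
```

On the cost question: computing a CRC32 over a record of at most a few hundred bytes is cheap next to the sync that follows it, so the main overhead would likely be the extra eight bytes per transaction of log volume.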

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




Ant Build Error in Hadoop 0.20.1

2011-02-16 Thread Ganesh Ananthanarayanan
Hi,

I am trying to run an ant compile on build.xml from Eclipse, on a Windows 7
machine with cygwin installed.

I get the following error:

[javac]
C:\Users\Ganesh\workspace\Hadoop\build\src\org\apache\hadoop\package-info.java:4:
unclosed string literal
[javac] @HadoopVersionAnnotation(version="0.20.2-dev", revision="812594
[javac] ^
[javac]
C:\Users\Ganesh\workspace\Hadoop\build\src\org\apache\hadoop\package-info.java:5:
unclosed string literal
[javac] ",
[javac] ^
[javac]
C:\Users\Ganesh\workspace\Hadoop\build\src\org\apache\hadoop\package-info.java:6:
class, interface, or enum expected
[javac]  user="ganesh-laptopganesh
[javac]  ^
[javac]
C:\Users\Ganesh\workspace\Hadoop\build\src\org\apache\hadoop\package-info.java:6:
unclosed string literal
[javac]  user="ganesh-laptopganesh
[javac]   ^
[javac]
C:\Users\Ganesh\workspace\Hadoop\build\src\org\apache\hadoop\package-info.java:8:
unclosed string literal
[javac] ")
[javac] ^


These are the contents of package-info.java:

/*
 * Generated by src/saveVersion.sh
 */
@HadoopVersionAnnotation(version="0.20.2-dev", revision="812594
",
 user="ganesh-laptopganesh
", date="Wed Feb 16 15:24:13 PST 2011", url="
http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1
")
package org.apache.hadoop;


Just fixing package-info.java by hand obviously didn't work, because it gets
regenerated by saveVersion.sh.
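For what it's worth, the error pattern above (every generated string literal breaking across a line) suggests the values that saveVersion.sh substitutes, such as the svn revision and the user name, contain embedded carriage returns or newlines from cygwin command output, which Java string literals cannot span. The snippet below is only a hypothetical illustration of stripping such terminators; the class and method names are made up, and a real fix would belong in saveVersion.sh itself:

```java
// Hypothetical illustration: a value with an embedded line terminator
// would break the generated annotation into an unclosed string literal,
// while a sanitized value fits on one line and compiles.
public class VersionValueSanitizer {

    // Strip CR/LF characters and trim surrounding whitespace so the
    // value fits inside a single-line Java string literal.
    static String sanitize(String raw) {
        return raw.replaceAll("[\\r\\n]", "").trim();
    }
}
```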

Does someone know the reason and the fix?

Thanks a lot!

-Ganesh