[jira] Created: (HDFS-1557) Separate Storage from FSImage

2010-12-23 Thread Ivan Kelly (JIRA)
Separate Storage from FSImage
-----------------------------

 Key: HDFS-1557
 URL: https://issues.apache.org/jira/browse/HDFS-1557
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 0.21.0
Reporter: Ivan Kelly
 Fix For: 0.22.0


FSImage currently derives from Storage, so FSEditLog has to call methods 
directly on FSImage to access the filesystem storage. This JIRA separates the 
Storage functionality out into an NNStorage class so that FSEditLog is less 
dependent on FSImage. From that point, the other parts of the circular 
dependency should be easy to fix.
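
As a rough, purely illustrative sketch of the direction this describes (the 
class shapes, the getStorageDirs() method and everything else below are 
simplified assumptions, not the actual Hadoop source or the eventual patch), 
the storage handling moves into its own class and FSEditLog is constructed 
against that class instead of reaching through FSImage:

import java.io.File;
import java.util.Collections;
import java.util.List;

// Simplified stand-in for the NNStorage class this JIRA proposes to pull
// out of FSImage.
class NNStorage {
    private final List<File> storageDirs;

    NNStorage(List<File> storageDirs) {
        this.storageDirs = storageDirs;
    }

    // Directories holding image and edits files.
    List<File> getStorageDirs() {
        return Collections.unmodifiableList(storageDirs);
    }
}

// With the split, FSEditLog only needs NNStorage, removing its direct
// dependency on FSImage.
class FSEditLog {
    private final NNStorage storage;

    FSEditLog(NNStorage storage) {
        this.storage = storage;
    }

    void logEdit(String op) {
        for (File dir : storage.getStorageDirs()) {
            // append op to the edits file under dir (omitted in this sketch)
        }
    }
}

// FSImage now holds an NNStorage rather than being the Storage itself.
class FSImage {
    private final NNStorage storage;
    private final FSEditLog editLog;

    FSImage(NNStorage storage) {
        this.storage = storage;
        this.editLog = new FSEditLog(storage);
    }
}

The point of the sketch is only that once FSEditLog is built against 
NNStorage, the FSEditLog-to-FSImage edge of the cycle disappears.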

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Hadoop-Hdfs-21-Build - Build # 139 - Still Failing

2010-12-23 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-21-Build/139/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 3131 lines...]
[ivy:resolve]   found javax.servlet#jstl;1.1.2 in default
[ivy:resolve]   found taglibs#standard;1.1.2 in maven2
[ivy:resolve]   found junitperf#junitperf;1.8 in maven2
[ivy:resolve] :: resolution report :: resolve 1009ms :: artifacts dl 40ms
[ivy:resolve]   :: evicted modules:
[ivy:resolve]   commons-logging#commons-logging;1.0.4 by 
[commons-logging#commons-logging;1.1.1] in [common]
[ivy:resolve]   commons-codec#commons-codec;1.2 by 
[commons-codec#commons-codec;1.4] in [common]
[ivy:resolve]   commons-logging#commons-logging;1.0.3 by 
[commons-logging#commons-logging;1.1.1] in [common]
[ivy:resolve]   commons-codec#commons-codec;1.3 by 
[commons-codec#commons-codec;1.4] in [common]
[ivy:resolve]   org.slf4j#slf4j-api;1.5.2 by [org.slf4j#slf4j-api;1.5.11] in 
[common]
[ivy:resolve]   org.apache.mina#mina-core;2.0.0-M4 by 
[org.apache.mina#mina-core;2.0.0-M5] in [common]
[ivy:resolve]   org.apache.ftpserver#ftplet-api;1.0.0-M2 by 
[org.apache.ftpserver#ftplet-api;1.0.0] in [common]
[ivy:resolve]   org.apache.ftpserver#ftpserver-core;1.0.0-M2 by 
[org.apache.ftpserver#ftpserver-core;1.0.0] in [common]
[ivy:resolve]   org.apache.mina#mina-core;2.0.0-M2 by 
[org.apache.mina#mina-core;2.0.0-M5] in [common]
[ivy:resolve]   commons-lang#commons-lang;2.3 by 
[commons-lang#commons-lang;2.5] in [common]
-
|  |modules||   artifacts   |
|   conf   | number| search|dwnlded|evicted|| number|dwnlded|
-
|  common  |   62  |   2   |   0   |   10  ||   52  |   0   |
-

ivy-retrieve-common:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#hdfsproxy [sync]
[ivy:retrieve]  confs: [common]
[ivy:retrieve]  0 artifacts copied, 52 already retrieved (0kB/10ms)
[ivy:cachepath] :: loading settings :: file = 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/ivy/ivysettings.xml

compile:
 [echo] contrib: hdfsproxy

compile-examples:

compile-test:
 [echo] contrib: hdfsproxy
[javac] Compiling 9 source files to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/contrib/hdfsproxy/test

test-junit:
[junit] WARNING: multiple versions of ant detected in path for junit 
[junit]  
jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit]  and 
jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Running org.apache.hadoop.hdfsproxy.TestHdfsProxy
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 3.348 sec
[junit] Test org.apache.hadoop.hdfsproxy.TestHdfsProxy FAILED

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build.xml:722: 
The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build.xml:703: 
The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/src/contrib/build.xml:48:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/src/contrib/hdfsproxy/build.xml:260:
 Tests failed!

Total time: 52 minutes 31 seconds
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Description set: 
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed


Hadoop-Hdfs-trunk - Build # 530 - Still Failing

2010-12-23 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/530/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 66 lines...]
[junit] 2010-12-23 12:36:36,620 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2010-12-23 12:36:36,620 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-23 12:36:36,735 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 53369
[junit] 2010-12-23 12:36:36,736 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 0 on 53369: exiting
[junit] 2010-12-23 12:36:36,736 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 1
[junit] 2010-12-23 12:36:36,736 WARN  datanode.DataNode 
(DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:39149, 
storageID=DS-2023999281-127.0.1.1-39149-1293107785734, infoPort=33426, 
ipcPort=53369):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 
[junit] 2010-12-23 12:36:36,737 INFO  ipc.Server (Server.java:run(675)) - 
Stopping IPC Server Responder
[junit] 2010-12-23 12:36:36,737 INFO  ipc.Server (Server.java:run(475)) - 
Stopping IPC Server listener on 53369
[junit] 2010-12-23 12:36:36,738 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2010-12-23 12:36:36,783 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-23 12:36:36,839 INFO  datanode.DataNode 
(DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:39149, 
storageID=DS-2023999281-127.0.1.1-39149-1293107785734, infoPort=33426, 
ipcPort=53369):Finishing DataNode in: 
FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-23 12:36:36,839 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 53369
[junit] 2010-12-23 12:36:36,839 INFO  datanode.DataNode 
(DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2010-12-23 12:36:36,840 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2010-12-23 12:36:36,840 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2010-12-23 12:36:36,840 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2010-12-23 12:36:36,843 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(2822)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-23 12:36:36,843 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(70)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-23 12:36:36,843 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(631)) - Number of transactions: 6 Total time 
for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of 
syncs: 3 SyncTimes(ms): 7 2 
[junit] 2010-12-23 12:36:36,845 INFO  ipc.Server (Server.java:stop(1611)) - 
Stopping server on 57567
[junit] 2010-12-23 12:36:36,845 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 6 on 57567: exiting
[junit] 2010-12-23 12:36:36,845 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 3 on 57567: exiting
[junit] 2010-12-23 12:36:36,846 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 7 on 57567: exiting
[junit] 2010-12-23 12:36:36,845 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 4 on 57567: exiting
[junit] 2010-12-23 12:36:36,846 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 9 on 57567: exiting
[junit] 2010-12-23 12:36:36,847 INFO  ipc.Server (Server.java:run(1444)) - 
IPC Server handler 1 on 57567: exiting
[junit] 2010-12-23 12:36:36,845 INFO  ipc.Server (Server.java:run(6

[jira] Resolved: (HDFS-1549) ArrayIndexOutOfBoundsException thrown from BlockLocation

2010-12-23 Thread Min Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Zhou resolved HDFS-1549.


Resolution: Not A Problem

Since the bug has already been fixed in trunk, I am closing this issue.

> ArrayIndexOutOfBoundsException thrown from BlockLocation 
> ----------------------------------------------------------
>
> Key: HDFS-1549
> URL: https://issues.apache.org/jira/browse/HDFS-1549
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Min Zhou
> Fix For: 0.22.0
>
>
> A BlockLocation object created through the default constructor has a hosts 
> array of length zero. It will throw an ArrayIndexOutOfBoundsException when 
> reading fields from DataInput if the arrays are not resized first (see the 
> sketch after the stack trace below):
> Exception in thread "IPC Client (47) connection to nn151/192.168.201.151:9020 
> from zhoumin" java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.hadoop.fs.BlockLocation.readFields(BlockLocation.java:177)
> at 
> org.apache.hadoop.fs.LocatedFileStatus.readFields(LocatedFileStatus.java:85)
> at 
> org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:237)
> at 
> org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:171)
> at 
> org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:219)
> at 
> org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
> at 
> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:509)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:439)
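
To make the failure mode concrete, here is a small self-contained sketch (a 
hypothetical HostsHolder class, not the actual org.apache.hadoop.fs.BlockLocation 
source) of the same bug and the usual fix: the no-arg constructor leaves a 
zero-length array, a readFields that writes into it throws, and the corrected 
readFields reallocates the array from the serialized length:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical Writable-style class illustrating the HDFS-1549 failure mode.
class HostsHolder {
    private String[] hosts;

    // Default constructor leaves a zero-length array, as described above.
    HostsHolder() {
        this.hosts = new String[0];
    }

    void write(DataOutput out) throws IOException {
        out.writeInt(hosts.length);
        for (String h : hosts) {
            out.writeUTF(h);
        }
    }

    // Buggy version: indexes into the existing zero-length array and throws
    // ArrayIndexOutOfBoundsException: 0 as soon as numHosts > 0.
    void readFieldsBuggy(DataInput in) throws IOException {
        int numHosts = in.readInt();
        for (int i = 0; i < numHosts; i++) {
            hosts[i] = in.readUTF();   // AIOOBE here
        }
    }

    // Fixed version: size the array from the serialized data before filling it.
    void readFields(DataInput in) throws IOException {
        int numHosts = in.readInt();
        hosts = new String[numHosts];
        for (int i = 0; i < numHosts; i++) {
            hosts[i] = in.readUTF();
        }
    }
}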

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Hadoop-Hdfs-22-branch - Build # 6 - Still Failing

2010-12-23 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-22-branch/6/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 3003 lines...]
[junit] Running org.apache.hadoop.hdfs.server.common.TestDistributedUpgrade
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 31.453 sec
[junit] Running org.apache.hadoop.hdfs.server.common.TestGetUriFromString
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.151 sec
[junit] Running org.apache.hadoop.hdfs.server.common.TestJspHelper
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.79 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestBlockReport
[junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 49.927 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMXBean
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.429 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 29.054 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.484 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 71.635 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestDataXceiver
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.557 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeRestart
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 5.804 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestReplicasMap
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.085 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestWriteToReplica
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 5.204 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestAllowFormat
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.273 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestBlockTokenWithDFS
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 35.492 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 3.377 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestBlocksWithNotEnoughRacks
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 7.921 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.338 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestCorruptFilesJsp
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 8.212 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestCorruptReplicaInfo
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.228 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 16.29 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 17.879 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLogRace
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 30.382 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestFsck
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 34.835 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 6.63 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
Build timed out. Aborting
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  
TEST-org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete.xml.

Error Message:


Stack Trace:
Test report file 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/test/TEST-org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete.xml
 was length 0

FAILED:  
org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

Error Message:
Too

[jira] Created: (HDFS-1558) Optimize FSNamesystem.startFileInternal

2010-12-23 Thread Dmytro Molkov (JIRA)
Optimize FSNamesystem.startFileInternal
---------------------------------------

 Key: HDFS-1558
 URL: https://issues.apache.org/jira/browse/HDFS-1558
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Dmytro Molkov
Assignee: Dmytro Molkov
Priority: Minor
 Fix For: 0.23.0


Currently, on file creation, there are three calls to FSDirectory inside 
FSNamesystem.startFileInternal that are essentially the same:

dir.exists(src)
dir.isDir(src)
dir.getFileInode(src)

All of them have to fetch the inode and then do some processing on it.
If we instead fetched the inode once and did all of the processing on that 
INode object, it would save us two trips through the namespace and two calls 
to normalizePath, all of which are relatively expensive.
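
A simplified before/after sketch of the proposed change, using hypothetical 
INode and FSDirectory stand-ins rather than the real namenode classes (the 
actual APIs, locking, and permission checks are considerably more involved):

import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-ins for the namenode classes named above.
class INode {
    final boolean isDirectory;

    INode(boolean isDirectory) {
        this.isDirectory = isDirectory;
    }
}

class FSDirectory {
    private final Map<String, INode> namespace = new HashMap<String, INode>();

    // One call is roughly one path resolution through the namespace.
    INode getINode(String src) {
        return namespace.get(src);
    }
}

class StartFileSketch {
    // Before: three separate lookups (exists, isDir, getFileInode), each of
    // which resolves src through the namespace again.
    static INode startFileOld(FSDirectory dir, String src) {
        boolean exists = dir.getINode(src) != null;               // lookup 1
        boolean isDir = exists && dir.getINode(src).isDirectory;  // lookup 2
        return exists && !isDir ? dir.getINode(src) : null;       // lookup 3
    }

    // After: fetch the inode once and derive exists/isDir/file from it.
    static INode startFileNew(FSDirectory dir, String src) {
        INode inode = dir.getINode(src);                          // one lookup
        boolean exists = inode != null;
        boolean isDir = exists && inode.isDirectory;
        return exists && !isDir ? inode : null;
    }
}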

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.