Hadoop-Hdfs-trunk - Build # 608 - Still Failing

2011-03-16 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/608/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 686601 lines...]
[junit] 2011-03-16 12:31:20,198 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
[junit] 2011-03-16 12:31:20,299 INFO  ipc.Server (Server.java:stop(1624)) - 
Stopping server on 33960
[junit] 2011-03-16 12:31:20,299 INFO  ipc.Server (Server.java:run(1457)) - 
IPC Server handler 0 on 33960: exiting
[junit] 2011-03-16 12:31:20,300 INFO  ipc.Server (Server.java:run(485)) - 
Stopping IPC Server listener on 33960
[junit] 2011-03-16 12:31:20,300 INFO  ipc.Server (Server.java:run(689)) - 
Stopping IPC Server Responder
[junit] 2011-03-16 12:31:20,300 WARN  datanode.DataNode 
(DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:46068, 
storageID=DS-1231135676-127.0.1.1-46068-1300278669382, infoPort=34367, 
ipcPort=33960):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
[junit] at 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit] 
[junit] 2011-03-16 12:31:20,300 INFO  datanode.DataNode 
(DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2011-03-16 12:31:20,398 INFO  datanode.DataBlockScanner 
(DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2011-03-16 12:31:20,401 INFO  datanode.DataNode 
(DataNode.java:run(1462)) - DatanodeRegistration(127.0.0.1:46068, 
storageID=DS-1231135676-127.0.1.1-46068-1300278669382, infoPort=34367, 
ipcPort=33960):Finishing DataNode in: 
FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
[junit] 2011-03-16 12:31:20,401 INFO  ipc.Server (Server.java:stop(1624)) - 
Stopping server on 33960
[junit] 2011-03-16 12:31:20,401 INFO  datanode.DataNode 
(DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads 
is 0
[junit] 2011-03-16 12:31:20,401 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk 
service threads...
[junit] 2011-03-16 12:31:20,402 INFO  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads 
have been shut down.
[junit] 2011-03-16 12:31:20,402 WARN  datanode.FSDatasetAsyncDiskService 
(FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already 
shut down.
[junit] 2011-03-16 12:31:20,504 WARN  namenode.FSNamesystem 
(FSNamesystem.java:run(2854)) - ReplicationMonitor thread received 
InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2011-03-16 12:31:20,504 INFO  namenode.FSEditLog 
(FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time 
for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of 
syncs: 3 SyncTimes(ms): 6 4 
[junit] 2011-03-16 12:31:20,504 WARN  namenode.DecommissionManager 
(DecommissionManager.java:run(70)) - Monitor interrupted: 
java.lang.InterruptedException: sleep interrupted
[junit] 2011-03-16 12:31:20,505 INFO  ipc.Server (Server.java:stop(1624)) - 
Stopping server on 52365
[junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - 
IPC Server handler 0 on 52365: exiting
[junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - 
IPC Server handler 5 on 52365: exiting
[junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - 
IPC Server handler 6 on 52365: exiting
[junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - 
IPC Server handler 9 on 52365: exiting
[junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - 
IPC Server handler 4 on 52365: exiting
[junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - 
IPC Server handler 3 on 52365: exiting
[junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - 
IPC Server handler 1 on 52365: exiting
[junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - 
IPC Server handler 2 on 52365: exiting
[junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(485)) - 
Stopping IPC Server listener on 52365
[junit] 2011-03-16 12:31:20

[jira] Created: (HDFS-1759) Improve error message when starting secure DN without jsvc

2011-03-16 Thread Todd Lipcon (JIRA)
Improve error message when starting secure DN without jsvc
--

 Key: HDFS-1759
 URL: https://issues.apache.org/jira/browse/HDFS-1759
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Trivial
 Fix For: 0.23.0


Users find the current message "Cannot start secure cluster without privileged 
resources." to be confusing -- it's not clear what actions to take to correct 
the error.
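
As a rough illustration only (the class, method, and wording below are invented
for this note, not taken from any eventual patch), the startup check that throws
the current message could spell out the jsvc requirement:
{noformat}
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical sketch of a friendlier failure; the real check sits in the
// DataNode startup path and may differ in detail.
class SecureDnStartupCheckSketch {
  static void checkPrivilegedResources(Object secureResources) {
    if (UserGroupInformation.isSecurityEnabled() && secureResources == null) {
      throw new RuntimeException(
          "Cannot start secure DataNode without privileged resources. "
          + "In a secure cluster the DataNode must be launched as root via "
          + "jsvc (set HADOOP_SECURE_DN_USER and use the start scripts) so "
          + "it can bind its privileged ports before dropping privileges.");
    }
  }
}
{noformat}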



[jira] Created: (HDFS-1760) problems with getFullPathName

2011-03-16 Thread Daryn Sharp (JIRA)
problems with getFullPathName
-

 Key: HDFS-1760
 URL: https://issues.apache.org/jira/browse/HDFS-1760
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.0


FSDirectory's getFullPathName method is flawed.  Given a list of inodes, it 
starts at index 1 instead of 0 (based on the assumption that inode[0] is always 
the root inode) and then builds the string with "/"+inode[i].  This means the 
empty string is returned for the root, or when requesting the full path of the 
parent dir for top level items.

In addition, it's not guaranteed that the list of inodes starts with the root 
inode.  The inode lookup routine will only fill the inode array with the last 
n-many inodes of a path if the array is smaller than the path.  In these cases, 
getFullPathName will skip the first component of the relative path and then 
assume the second component starts at the root, e.g. "a/b/c" becomes "/b/c".

There are a few places in the code where the issue was hacked around by 
assuming that a 0-length path meant a hardcoded "/" instead of Path.SEPARATOR.
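
To make the failure mode concrete, here is a small stand-in for the logic
described above (plain strings in place of real INode objects, purely
illustrative):
{noformat}
// Mimics the flawed logic: join from index 1, assuming element 0 is the root.
static String fullPathNameAsDescribed(String[] inodeNames) {
  StringBuilder path = new StringBuilder();
  for (int i = 1; i < inodeNames.length; i++) {
    path.append("/").append(inodeNames[i]);
  }
  return path.toString();
}

// fullPathNameAsDescribed(new String[] {""})            returns ""  (not "/")
// fullPathNameAsDescribed(new String[] {"a", "b", "c"}) returns "/b/c" for the
//   relative lookup "a/b/c": the first component is dropped and the second is
//   treated as if it hung directly off the root.
{noformat}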



[jira] Resolved: (HDFS-1755) Hdfs Federation: The BPOfferService must always connect to namenode as the login user.

2011-03-16 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey resolved HDFS-1755.


  Resolution: Fixed
Release Note: Committed to the branch.
Hadoop Flags: [Reviewed]

> Hdfs Federation: The BPOfferService must always connect to namenode as the 
> login user.
> --
>
> Key: HDFS-1755
> URL: https://issues.apache.org/jira/browse/HDFS-1755
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: Federation Branch
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Fix For: Federation Branch
>
> Attachments: HDFS-1755.2.patch, HDFS-1755.3.patch
>
>
>   The BPOfferService thread, when started, normally connects to the namenode 
> as the login user. But in the case of refreshNamenodes, a new BPOfferService 
> may be started in the context of an RPC call where the user is dfsadmin. In 
> such a case the connection to the namenode will fail.
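
A minimal sketch of the general technique, assuming the fix wraps the namenode
connection in the login user's context; this is illustrative and not
necessarily what the attached patches do:
{noformat}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;

class LoginUserConnectionSketch {
  // Run the connect action as the login user (the datanode's own principal),
  // not as the remote user of the RPC (e.g. dfsadmin) that triggered
  // refreshNamenodes.
  static <T> T connectAsLoginUser(PrivilegedExceptionAction<T> connect)
      throws IOException, InterruptedException {
    return UserGroupInformation.getLoginUser().doAs(connect);
  }
}
{noformat}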



Re: Merging Namenode Federation feature (HDFS-1052) to trunk

2011-03-16 Thread Konstantin Shvachko
On Mon, Mar 14, 2011 at 11:19 PM, suresh srinivas wrote:

> Thanks for starting off the discussion.
>
> > This is a huge new feature with 86 jiras already filed, which
> > substantially increases the complexity of the code base.
> These are 86 jiras filed in a feature branch. We decided to make these
> changes in smaller increments, instead of a jumbo patch. This was done in
> good faith, as the community did not want a jumbo patch (as seen in several
> discussions), to make reviewing of the patch easier and to record the changes
> for reference.
>

Thanks for doing it that way.


> > Having an in-depth motivation and benchmarking will be needed before the
> > community decides on adopting it for support.
> This comes as a surprise, especially from Konstantin :-). The first part of
> the proposal and design both cover motivation.
>

That is a different motivation. The document talks about why you should use
federation. I am asking about motivation of supporting the code base while
not using it. At least this is how I understand Allen's question and some of
my colleagues'.

> So far our tests show no difference with federation.

This is exactly what is needed.
If you could put some numbers in the jira for the reference.

Also it is interesting to know whether there is a benefit in splitting
the namespace. Can I e.g. do more getBlockLocations per second?
This is one of the aspects of scaling, right?


> As we developed this feature, some significant improvements have been made
> to the system - fast snapshots (snapshot time down from 1hr 45 mins to 1
> min!), fast startup, cleanup of storage, fixing multi threading issues in
> several places, decommissioning improvements etc.
>

This is motivation. I am glad I asked.


> > The purpose of my reply was to get this discussion going, as I found
> > Allen's question unanswered for 2 weeks.
> My email was sent on March 3rd. Allen's email was sent on March 12th.
>

Sorry, my bad.


> > The concern he has seems legitimate to me. If ops think federation will
> > "make running a grid much much harder" I want to know why and how much
> > harder.
> I would like to understand the concerns here. Allen, please add details.
>
> > The way I see it now, Federation introduces
> > - lots of code complexity to the system
> > - harder manageability, according to Allen
> > - potential performance degradation (tbd)
> I have addressed these already.
>
> > And the main question for those 95% of users, who don't run large clusters
> > or don't want to place all their compute resources in one data center, is
> > what is the advantage in supporting it?
> This is a valid concern. Hence the single namenode configuration that most
> installations run today will run as is. We put a lot of development and
> testing effort into ensuring this.
>

I don't know what you mean by "as is". My experience with this word in real
estate tells me it can be anything.


Hadoop-Hdfs-22-branch - Build # 29 - Still Failing

2011-03-16 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-22-branch/29/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 3320 lines...]

compile-hdfs-test:
   [delete] Deleting directory 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache

run-test-hdfs-excluding-commit-and-smoke:
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
[junit] WARNING: multiple versions of ant detected in path for junit 
[junit]  
jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit]  and 
jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Running org.apache.hadoop.fs.TestFiListPath
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 2.207 sec
[junit] Running org.apache.hadoop.fs.TestFiRename
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 5.608 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHFlush
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 15.64 sec
[junit] Running org.apache.hadoop.hdfs.TestFiHftp
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 43.384 sec
[junit] Running org.apache.hadoop.hdfs.TestFiPipelines
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.501 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
[junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 210.721 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
[junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 416.025 sec
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.602 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:745:
 Tests failed!

Total time: 50 minutes 44 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0

Error Message:
127.0.0.1:59191is not an underUtilized node

Stack Trace:
junit.framework.AssertionFailedError: 127.0.0.1:59191is not an underUtilized 
node
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1011)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1496)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247)
at 
org.apache.hadoop.hdfs.serv

[jira] Created: (HDFS-1761) TestTransferRbw fails

2011-03-16 Thread Tsz Wo (Nicholas), SZE (JIRA)
TestTransferRbw fails
-

 Key: HDFS-1761
 URL: https://issues.apache.org/jira/browse/HDFS-1761
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE


It failed in [build 
#264|https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/264/testReport/org.apache.hadoop.hdfs.server.datanode/TestTransferRbw/testTransferRbw/]
{noformat}
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 65536 = 
numBytes < visible = 90564, r=ReplicaInPipeline, blk_8440021909252053811_1001, 
TEMPORARY
  getNumBytes() = 65536
  getBytesOnDisk()  = 0
  getVisibleLength()= -1
  getVolume()   = 
/grid/0/hudson/hudson-slave/workspace/PreCommit-HDFS-Build/trunk/build/test/data/dfs/data/data3/current/finalized
  getBlockFile()= 
/grid/0/hudson/hudson-slave/workspace/PreCommit-HDFS-Build/trunk/build/test/data/dfs/data/data3/tmp/blk_8440021909252053811
  bytesAcked=0
  bytesOnDisk=0
at 
org.apache.hadoop.hdfs.server.datanode.FSDataset.convertTemporaryToRbw(FSDataset.java:1383)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.convertTemporaryToRbw(DataNode.java:2023)
at 
org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw(TestTransferRbw.java:121)
{noformat}



Re: Merging Namenode Federation feature (HDFS-1052) to trunk

2011-03-16 Thread suresh srinivas
> That is a different motivation. The document talks about why you should use
> federation. I am asking about motivation of supporting the code base while
> not using it. At least this is how I understand Allen's question and some of
> my colleagues'.
>

Namenode code is not changed at all. The datanode code changes to add the
notion of a block pool and a thread per NN. For a single NN, the datanode is
equivalent to the current datanode. If you argue that there should not be any
code change, I am not sure how features like this can be added to HDFS. There
is no change from the user's perspective or in the performance of the system,
and no additional complexity over the existing system.
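
To make "a thread per NN" concrete, here is a rough sketch with invented names
(the real BPOfferService discussed in HDFS-1755 above is of course more
involved):

import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;

// Invented names; purely an illustration of one offer/heartbeat loop per
// configured namenode. With a single NN, exactly one such thread runs, which
// is why a single-NN datanode behaves like today's datanode.
class PerNamenodeThreadsSketch {
  static List<Thread> startOnePerNamenode(List<InetSocketAddress> namenodes) {
    List<Thread> threads = new ArrayList<Thread>();
    for (final InetSocketAddress nn : namenodes) {
      Thread t = new Thread(new Runnable() {
        public void run() {
          // register with nn, then heartbeat and block-report for the block
          // pool this namenode owns
        }
      }, "BPOfferService-" + nn);
      t.start();
      threads.add(t);
    }
    return threads;
  }
}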


> If you could put some numbers in the jira for the reference.
>
Will do.


>
> Also it is interesting to know whether there is a benefit in splitting
> the namespace. Can I e.g. do more getBlockLocations per second?
> This is one of the aspects of scaling, right?
>

I do not understand your question. This feature does not scale
getBlockLocations per second for a single NN. When you use many NNs, the total
number of requests per second does scale for the entire cluster.

> As we developed this feature, some significant improvements have been made
> to the system - fast snapshots (snapshot time down from 1hr 45 mins to 1
> min!), fast startup, cleanup of storage, fixing multi threading issues in
> several places, decommissioning improvements etc.
>

> > This is a valid concern. Hence the single namenode configuration that most
> > installations run today will run as is. We put a lot of development and
> > testing effort into ensuring this.
> >
>
> I don't know what you mean by "as is". My experience with this word in real
> estate tells me it can be anything.
>

I used the word with the following meaning:
http://www.merriam-webster.com/dictionary/as%20is
— *as is*
*:* in the presently existing condition without modification


Re: Merging Namenode Federation feature (HDFS-1052) to trunk

2011-03-16 Thread suresh srinivas
> Namenode code is not changed at all.
I want to make sure I qualify this right. The change is not significant, other
than adding the notion of a block pool ID (BPID) that the NN uses.


[jira] Created: (HDFS-1762) Allow TestHDFSCLI to be run against a cluster

2011-03-16 Thread Tom White (JIRA)
Allow TestHDFSCLI to be run against a cluster
-

 Key: HDFS-1762
 URL: https://issues.apache.org/jira/browse/HDFS-1762
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Tom White


Currently TestHDFSCLI starts mini clusters to run tests against. It would be 
useful to support running against arbitrary clusters for testing purposes.
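
One possible shape of this, as a hedged sketch: the system property name below
is invented, and the MiniDFSCluster calls shown are the plain pre-builder
constructors.
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

class CliTestClusterSketch {
  // If an external namenode URI is supplied, test against it; otherwise fall
  // back to the current MiniDFSCluster behaviour. The caller would remain
  // responsible for shutting the mini cluster down after the tests.
  static FileSystem getTestFileSystem(Configuration conf) throws Exception {
    String external = System.getProperty("test.cli.fs.default.name");
    if (external != null) {
      conf.set("fs.default.name", external);
      return FileSystem.get(conf);
    }
    MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, (String[]) null);
    cluster.waitActive();
    return cluster.getFileSystem();
  }
}
{noformat}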



Hadoop-Hdfs-21-Build - Build # 148 - Still Failing

2011-03-16 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-21-Build/148/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 3118 lines...]
[ivy:resolve]   found javax.servlet#jstl;1.1.2 in maven2
[ivy:resolve]   found taglibs#standard;1.1.2 in maven2
[ivy:resolve]   found junitperf#junitperf;1.8 in maven2
[ivy:resolve] :: resolution report :: resolve 528ms :: artifacts dl 15ms
[ivy:resolve]   :: evicted modules:
[ivy:resolve]   commons-logging#commons-logging;1.0.4 by 
[commons-logging#commons-logging;1.1.1] in [common]
[ivy:resolve]   commons-codec#commons-codec;1.2 by 
[commons-codec#commons-codec;1.4] in [common]
[ivy:resolve]   commons-logging#commons-logging;1.0.3 by 
[commons-logging#commons-logging;1.1.1] in [common]
[ivy:resolve]   commons-codec#commons-codec;1.3 by 
[commons-codec#commons-codec;1.4] in [common]
[ivy:resolve]   org.slf4j#slf4j-api;1.5.2 by [org.slf4j#slf4j-api;1.5.11] in 
[common]
[ivy:resolve]   org.apache.mina#mina-core;2.0.0-M4 by 
[org.apache.mina#mina-core;2.0.0-M5] in [common]
[ivy:resolve]   org.apache.ftpserver#ftplet-api;1.0.0-M2 by 
[org.apache.ftpserver#ftplet-api;1.0.0] in [common]
[ivy:resolve]   org.apache.ftpserver#ftpserver-core;1.0.0-M2 by 
[org.apache.ftpserver#ftpserver-core;1.0.0] in [common]
[ivy:resolve]   org.apache.mina#mina-core;2.0.0-M2 by 
[org.apache.mina#mina-core;2.0.0-M5] in [common]
[ivy:resolve]   commons-lang#commons-lang;2.3 by 
[commons-lang#commons-lang;2.5] in [common]
-
|  |modules||   artifacts   |
|   conf   | number| search|dwnlded|evicted|| number|dwnlded|
-
|  common  |   62  |   2   |   0   |   10  ||   52  |   0   |
-

ivy-retrieve-common:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#hdfsproxy [sync]
[ivy:retrieve]  confs: [common]
[ivy:retrieve]  0 artifacts copied, 52 already retrieved (0kB/7ms)
[ivy:cachepath] :: loading settings :: file = 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/ivy/ivysettings.xml

compile:
 [echo] contrib: hdfsproxy

compile-examples:

compile-test:
 [echo] contrib: hdfsproxy
[javac] Compiling 9 source files to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build/contrib/hdfsproxy/test

test-junit:
[junit] WARNING: multiple versions of ant detected in path for junit 
[junit]  
jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit]  and 
jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Running org.apache.hadoop.hdfsproxy.TestHdfsProxy
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 42.759 sec
[junit] Test org.apache.hadoop.hdfsproxy.TestHdfsProxy FAILED

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build.xml:731: 
The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/build.xml:712: 
The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/src/contrib/build.xml:48:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-21-Build/trunk/src/contrib/hdfsproxy/build.xml:260:
 Tests failed!

Total time: 212 minutes 40 seconds
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Description set: 
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed


[jira] Created: (HDFS-1763) Replace hard-coded option strings with variables from DFSConfigKeys

2011-03-16 Thread Eli Collins (JIRA)
Replace hard-coded option strings with variables from DFSConfigKeys
---

 Key: HDFS-1763
 URL: https://issues.apache.org/jira/browse/HDFS-1763
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 0.23.0


There are some places in the code where we use hard-coded strings instead of 
the equivalent DFSConfigKeys constant, and a couple of places where the default 
is defined in multiple places (once in DFSConfigKeys and once elsewhere, though 
both have the same value). This is error-prone, and also a pain in that it 
prevents Eclipse from easily showing you all the places where a particular 
config option is used. Let's replace all uses of the hard-coded option strings 
with the corresponding variables in DFSConfigKeys.
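
As an illustration (the replication key is just one candidate, and the wrapper
methods are invented for this sketch), the per-call-site change would look
roughly like:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

class ConfigKeySketch {
  static short replicationBefore(Configuration conf) {
    // Hard-coded key and default, duplicated at every call site.
    return (short) conf.getInt("dfs.replication", 3);
  }

  static short replicationAfter(Configuration conf) {
    // Single source of truth; an IDE can now find every use of the key.
    return (short) conf.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
                               DFSConfigKeys.DFS_REPLICATION_DEFAULT);
  }
}
{noformat}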
