Hadoop-Hdfs-0.23-Build - Build # 54 - Still Unstable

2011-10-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/54/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 9749 lines...]
[ERROR] at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
[ERROR] at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
[ERROR] at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
[ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[ERROR] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[ERROR] at java.lang.reflect.Method.invoke(Method.java:597)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
+ /home/jenkins/tools/maven/latest/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover -DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Publishing Clover coverage report...
Publishing Clover HTML report...
Publishing Clover XML report...
Publishing Clover coverage results...
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-3003
Updating MAPREDUCE-3304
Updating MAPREDUCE-3204
Updating MAPREDUCE-3295
Updating HADOOP-7737
Updating HDFS-2509
Updating HADOOP-7642
Updating MAPREDUCE-3183
Updating MAPREDUCE-3209
Updating HADOOP-7768
Updating HADOOP-7763
Updating HADOOP-7624
Updating HDFS-2294
Updating HDFS-2322
Updating MAPREDUCE-3306
Updating MAPREDUCE-3256
Updating HADOOP-7740
Updating HADOOP-7743
Updating HDFS-2465
Updating HDFS-2436
Updating HADOOP-7446
Updating MAPREDUCE-3014
Updating MAPREDUCE-3248
Updating MAPREDUCE-3171
Updating MAPREDUCE-3199
Updating MAPREDUCE-2775
Updating HDFS-2493
Updating HADOOP-7770
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
3 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount

Error Message:
org.apache.hadoop.hdfs.protocol.ExtendedBlock cannot be cast to java.lang.Comparable

Stack Trace:
java.lang.ClassCastException: org.apache.hadoop.hdfs.protocol.ExtendedBlock cannot be cast to java.lang.Comparable
at java.util.TreeMap.getEntry(TreeMap.java:325)
at java.util.TreeMap.containsKey(TreeMap.java:209)
at java.util.TreeSet.contains(TreeSet.java:217)
at org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.__CLR3_0_29bdgm61b9n(TestNodeCount.java:114)
at org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount(TestNodeCount.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:232)
at junit.framework.TestSuite.run(TestSuite.java:227)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccess
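
The cast failure above is generic java.util behavior: a TreeSet built without a Comparator falls back to natural ordering, and TreeMap.getEntry casts the probe key to Comparable before doing any comparison, so even contains() throws ClassCastException when the element type (here ExtendedBlock) does not implement Comparable. A minimal sketch of the failure mode and two ways out; the Block class below is a hypothetical stand-in, not the HDFS type:

    import java.util.Comparator;
    import java.util.HashSet;
    import java.util.TreeSet;

    public class TreeSetCastSketch {
      // Hypothetical stand-in for ExtendedBlock: equals/hashCode, no Comparable.
      static class Block {
        final long id;
        Block(long id) { this.id = id; }
        @Override public boolean equals(Object o) {
          return o instanceof Block && ((Block) o).id == id;
        }
        @Override public int hashCode() { return (int) (id ^ (id >>> 32)); }
      }

      public static void main(String[] args) {
        try {
          // Throws ClassCastException even on an empty set: TreeMap.getEntry
          // casts the key to Comparable before comparing anything.
          new TreeSet<Block>().contains(new Block(1));
        } catch (ClassCastException e) {
          System.out.println("no Comparator: " + e);
        }

        // Fix 1: supply an explicit Comparator.
        TreeSet<Block> ordered = new TreeSet<Block>(new Comparator<Block>() {
          public int compare(Block a, Block b) {
            return a.id < b.id ? -1 : (a.id == b.id ? 0 : 1);
          }
        });
        ordered.add(new Block(1));
        System.out.println(ordered.contains(new Block(1)));   // true

        // Fix 2: use a hash-based set when ordering is not needed.
        HashSet<Block> hashed = new HashSet<Block>();
        hashed.add(new Block(1));
        System.out.println(hashed.contains(new Block(1)));    // true
      }
    }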

Jenkins build is still unstable: Hadoop-Hdfs-0.23-Build #54

2011-10-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/54/




Build failed in Jenkins: Hadoop-Hdfs-trunk #847

2011-10-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/847/

Changes:

[acmurthy] MAPREDUCE-3256. Added authorization checks for the protocol between 
NodeManager and ApplicationMaster. Contributed by Vinod K V.

[szetszwo] HDFS-2436. Change FSNamesystem.setTimes(..) for allowing setting 
times on directories.  Contributed by Uma Maheswara Rao G

[tomwhite] HADOOP-7763. Add top-level navigation to APT docs.

[szetszwo] HDFS-2509. Add a test for DistributedFileSystem.getFileChecksum(..) 
on directories or non existing files.  Contributed by Uma Maheswara Rao G

[todd] HADOOP-7446. Implement CRC32C native code using SSE4.2 instructions. 
Contributed by Kihwal Lee and Todd Lipcon.

[todd] HDFS-2465. Add HDFS support for fadvise readahead and drop-behind. 
Contributed by Todd Lipcon.

[suresh] HDFS-2499. RPC client is created incorrectly introduced in HDFS-2459. 
Contributed by Suresh Srinivas.

[suresh] HADOOP-7773. Add support for protocol buffer based RPC engine. 
Contributed by Suresh Srinivas.

[tucu] Adding missing CHANGES.txt entry for MAPREDUCE-3014

[jitendra] HADOOP-7770. ViewFS getFileChecksum throws FileNotFoundException for 
files in /tmp and /user. Contributed by Ravi Prakash.

[vinodkv] MAPREDUCE-3306. Fixed a bug in NodeManager ApplicationImpl that was 
causing NodeManager to crash. (vinodkv)

[acmurthy] MAPREDUCE-3304. Fixed intermittent test failure due to a race in 
TestRMContainerAllocator#testBlackListedNodes. Contributed by Ravi Prakash.

[szetszwo] HDFS-2493. Remove reference to FSNamesystem in blockmanagement 
classes.

[vinodkv] MAPREDUCE-2775. Fixed ResourceManager and NodeManager to force a 
decommissioned node to shutdown. Contributed by Devaraj K.

[eyang] HADOOP-7740. Fixed security audit logger configuration. (Arpit Gupta 
via Eric Yang)

--
[...truncated 16800 lines...]
  [javadoc] [loading ...]

Hadoop-Hdfs-trunk - Build # 847 - Still Failing

2011-10-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/847/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 16993 lines...]
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS Project 0.24.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target
[INFO]
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO]
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [3:52.785s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.075s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 3:53.321s
[INFO] Finished at: Sat Oct 29 11:38:22 UTC 2011
[INFO] Final Memory: 61M/750M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover -DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-3304
Updating HDFS-2436
Updating HADOOP-7446
Updating HDFS-2459
Updating MAPREDUCE-3014
Updating HDFS-2509
Updating HADOOP-7763
Updating HDFS-2499
Updating HADOOP-7773
Updating MAPREDUCE-3306
Updating MAPREDUCE-3256
Updating HADOOP-7740
Updating MAPREDUCE-2775
Updating HDFS-2493
Updating HDFS-2465
Updating HADOOP-7770
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestDFSClientExcludedNodes.testExcludedNodes

Error Message:
Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:586)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:435)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:253)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:169)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:371)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:314)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:298)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:332)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:642)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:546)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:262)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:86)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:248)
at org.apache.hadoop.hdfs.TestDFSClientExcludedNodes.__CLR3_0_2l00f
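
The "already locked" failure is the classic symptom of two MiniDFSCluster instances, often one leaked by an earlier test, pointing at the same name-node storage directory: Storage.lock() finds the in_use.lock file still held. A sketch of the usual defensive pattern, shutting the cluster down in a finally block so the lock on the shared test data directory is always released (the class name and body are illustrative, not the actual TestDFSClientExcludedNodes code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniClusterLifecycleSketch {
      public void runIsolated() throws Exception {
        Configuration conf = new HdfsConfiguration();
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(1)
            .build();
        try {
          cluster.waitActive();
          // ... exercise HDFS against the cluster here ...
        } finally {
          // Releases the lock under .../test/data/dfs/name*; if a test skips
          // this, the next cluster to start cannot lock the storage directory.
          cluster.shutdown();
        }
      }
    }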

[jira] [Created] (HDFS-2513) Bump jetty to 6.1.26

2011-10-29 Thread Konstantin Boudnik (Created) (JIRA)
Bump jetty to 6.1.26


 Key: HDFS-2513
 URL: https://issues.apache.org/jira/browse/HDFS-2513
 Project: Hadoop HDFS
  Issue Type: Task
  Components: build
Affects Versions: 0.22.0
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.22.0


HDFS part of HADOOP-7450





[jira] [Created] (HDFS-2514) Link resolution bug for intermediate symlinks with relative targets

2011-10-29 Thread Eli Collins (Created) (JIRA)
Link resolution bug for intermediate symlinks with relative targets
---

 Key: HDFS-2514
 URL: https://issues.apache.org/jira/browse/HDFS-2514
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.21.0, 0.22.0, 0.23.0
Reporter: Eli Collins
Assignee: Eli Collins


There's a bug in the way the NameNode resolves intermediate symlinks (i.e. where the 
symlink is not the final path component) when the symlink's target is 
a relative path. Will post the full description in the first comment.
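
For concreteness, a hypothetical sketch of the scenario (paths invented for illustration): a link whose target is relative and which appears mid-path, so the resolver must interpret the target against the link's parent directory.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileContext;
    import org.apache.hadoop.fs.Path;

    public class IntermediateSymlinkScenario {
      public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext(new Configuration());

        // /a/link -> ../b : relative target, and the link is an intermediate
        // component of the path opened below, not the final one.
        fc.createSymlink(new Path("../b"), new Path("/a/link"), true);

        // Should resolve against the link's parent (/a), i.e. open /b/file.
        fc.open(new Path("/a/link/file")).close();
      }
    }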





Re: relative symbolic links in HDFS

2011-10-29 Thread Eli Collins
Hey Chuck,

Why is it problematic for your use case that the symlink is stored
fully qualified in FileStatus? Would you like FileContext#getSymlink to
return the same Path that you used as the target in createSymlink?

The current behavior is so that getFileLinkStatus is consistent with
getFileStatus(new Path("/some/file")), which returns a fully qualified
path (e.g. hdfs://myhost:123/some/file). Note that you can use
FileContext#getLinkTarget to get the path used when creating the
link. Some more background is in the design doc:
https://issues.apache.org/jira/secure/attachment/12434745/design-doc-v4.txt
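
To make the distinction concrete, a small sketch (paths and output invented for illustration) of the two accessors: FileStatus#getSymlink, as returned by getFileLinkStatus, hands back the fully qualified form, while FileContext#getLinkTarget returns the target as it was passed to createSymlink.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileContext;
    import org.apache.hadoop.fs.Path;

    public class SymlinkAccessorsSketch {
      public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext(new Configuration());

        Path target = new Path("../b/data.txt");   // relative target
        Path link = new Path("/a/link");           // illustrative path

        fc.createSymlink(target, link, false);

        // Fully qualified, e.g. hdfs://myhost:123/b/data.txt, consistent with
        // what getFileStatus returns for ordinary paths.
        System.out.println(fc.getFileLinkStatus(link).getSymlink());

        // The path used when creating the link: ../b/data.txt
        System.out.println(fc.getLinkTarget(link));
      }
    }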

There's a jira for porting FsShell to FileContext (HADOOP-6424); if
you have a patch (even a partial one), feel free to post it to the jira.
Note that since symlinks are not implemented in FileSystem, clients
that use FileSystem to access paths with symlinks will fail.

Btw when looking at the code you pointed out I noticed a bug in link
resolution (HADOOP-7783), thanks!

Thanks,
Eli


On Fri, Oct 28, 2011 at 9:46 AM, Charles Baker  wrote:
> Hey guys. We are in the early stages of planning and evaluating a Hadoop
> 'cold-storage' cluster for medium- to long-term storage of mixed data (small
> to large files, zips, tars, etc.) and tons of symlinks. We realize that
> small files aren't ideal in HDFS, but this is for long-term storage and, by
> leveraging existing equipment, beats the cost of more NetApps by potentially
> several hundred thousand dollars. We are already successfully using Hadoop
> and the MapReduce framework in a different project and have developed quite
> a bit of in-house Hadoop expertise.
>
>
>
> Since this use case is preserving and restoring an arbitrary directory
> structure, I have been evaluating 0.21.0's support for symlinks and found
> that although it happily creates relative symlinks, the code called to
> retrieve the symlink, FileContext.getFileLinkStatus(), always converts the
> relative Path object to an absolute one via the qualifySymlinkTarget()
> method. I was easily able to work around this limitation by changing the one
> line of code that calls this function from:
>
> fi.setSymlink(qualifySymlinkTarget(fs, p, fi.getSymlink()));
>
> to:
>
> fi.setSymlink(fi.getSymlink());
>
> It has made us curious as to why the decision was made to always return the
> absolute path of a symlink in the first place. Is it that attempts to open
> the targets of relative symlinks throw exceptions, so returning the absolute
> path saves the user the work of constructing it, since that's the general
> use case? Or does this workaround violate some internal assumptions of the
> code, or ideas about how a URI should behave (even though relative paths are
> implicitly supported by the URI object)? Any insight you guys can shed on
> this would be great. I've tested the above change by adding support for
> symlinks (into and out of HDFS) to FsShell.copyToLocal() and copyFromLocal()
> using a mixed bag of relative and absolute symlinks and symlinks->symlinks,
> and have so far found no ill effects.
>
>
>
> Thanks!
>
>
>
> -Chuck