Jenkins build is back to normal : Hadoop-Hdfs-0.23-Build #520

2013-02-09 Thread Apache Jenkins Server
See 



Hadoop-Hdfs-trunk - Build # 1311 - Still Failing

2013-02-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1311/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 13847 lines...]
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

Running org.apache.hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.794 sec
Running org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.314 sec

Results :

Tests in error: 
  testOperation[7](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem)
  testOperationDoAs[7](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem)

Tests run: 283, Failures: 0, Errors: 2, Skipped: 0

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [1:26:48.560s]
[INFO] Apache Hadoop HttpFS .. FAILURE [1:44.178s]
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:28:33.525s
[INFO] Finished at: Sat Feb 09 13:02:03 UTC 2013
[INFO] Final Memory: 54M/866M
[INFO] 
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs-httpfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs-httpfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating YARN-362
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any)
###
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1311

2013-02-09 Thread Apache Jenkins Server
See 

Changes:

[jlowe] YARN-362. Unexpected extra results when using webUI table search. Contributed by Ravi Prakash

--
[...truncated 13654 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ hadoop-hdfs-httpfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hadoop-hdfs-httpfs ---
[INFO] Compiling 56 source files to 

[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-web-xmls) @ hadoop-hdfs-httpfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

 [copy] Copying 1 file to 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ hadoop-hdfs-httpfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ hadoop-hdfs-httpfs ---
[INFO] Compiling 46 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.12.3:test (default-test) @ hadoop-hdfs-httpfs ---
[INFO] Surefire report directory: 


---
 T E S T S
---
Running org.apache.hadoop.test.TestDirHelper
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.048 sec
Running org.apache.hadoop.test.TestJettyHelper
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.052 sec
Running org.apache.hadoop.test.TestHdfsHelper
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec
Running org.apache.hadoop.test.TestHTestCase
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.189 sec
Running org.apache.hadoop.test.TestExceptionHelper
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec
Running org.apache.hadoop.test.TestHFSTestCase
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.909 sec
Running org.apache.hadoop.lib.service.instrumentation.TestInstrumentationService
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.64 sec
Running org.apache.hadoop.lib.service.scheduler.TestSchedulerService
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec
Running org.apache.hadoop.lib.service.security.TestProxyUserService
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.697 sec
Running org.apache.hadoop.lib.service.security.TestDelegationTokenManagerService
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.735 sec
Running org.apache.hadoop.lib.service.security.TestGroupsService
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec
Running org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.003 sec
Running org.apache.hadoop.lib.server.TestServerConstructor
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.113 sec
Running org.apache.hadoop.lib.server.TestServer
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.401 sec
Running org.apache.hadoop.lib.server.TestBaseService
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.269 sec
Running org.apache.hadoop.lib.lang.TestRunnableCallable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.058 sec
Running org.apache.hadoop.lib.lang.TestXException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.057 sec
Running org.apache.hadoop.lib.wsrs.TestParam
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.063 sec
Running org.apache.hadoop.lib.wsrs.TestInputStreamEntity
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec
Running org.apache.hadoop.lib.wsrs.TestJSONProvider
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec
Running org.apache.hadoop.lib.wsrs.TestJSONMapProvider
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec
Running org.

RE: [Hadoop]Environment variable CLASSPATH not set!

2013-02-09 Thread John Gordon
Env variables hang off of the session context and are specific to both the user 
profile and their shell-specific preferences.  If your driver is loading in 
kernel mode, it cannot depend on env variables.

This will be a problem for the other environment variables like HADOOP_HOME.

Instead of using Java directly in kernel mode, I suggest splitting the problem:
1. An FS abstraction for the kernel:
   a. Like the NFS filesystem kernel driver implementation, for example --
      a remote-mount FS.
   b. Use a C implementation of the protocol.
      I. To avoid issues, use Hadoop 2.0 for protobufs, since they yield a
         versioned protocol and avoid hangs and dumps when the protocol
         changes.
      II. OR push most of your implementation into a proxy service, and
          either:
          a. surface NFS directly, and just use the NFS kernel driver, or
          b. surface your own protocol to be consumed in the kernel-mode
             driver.
2. Start HDFS elsewhere, as an independent service in user mode like cups,
   httpd, or xinetd (see the sketch after this list).
   a. It will have a session and the ability to configure env vars.
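
To make point 2 concrete, here is a minimal C sketch of such a user-mode
launcher. A service started by init or xinetd inherits no login-shell
environment, so the launcher sets what libhdfs needs explicitly and then
execs the real service binary. The specific paths and the hlfs-proxy binary
name are illustrative assumptions, not anything prescribed by Hadoop:

/* launcher.c -- sketch: set the env a libhdfs-backed service needs,
 * then replace this process with the service binary. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Example values only; take them from your deployment config. */
    setenv("JAVA_HOME", "/usr/lib/jvm/java-6-sun", 1);
    setenv("HADOOP_HOME", "/usr/lib/hadoop-0.20", 1);
    setenv("CLASSPATH", "/usr/lib/hadoop-0.20/conf:"
           "/usr/lib/hadoop-0.20/hadoop-core-0.20.2-cdh3u2.jar", 1);

    char *const args[] = { "hlfs-proxy", NULL };  /* hypothetical binary */
    execv("/usr/local/bin/hlfs-proxy", args);     /* replaces this process */
    perror("execv");                              /* reached only on failure */
    return 1;
}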


Not sure if that exactly answers the question, but I hope it was helpful.

John

Sent from my Windows Phone

From: harryxiyou
Sent: 2/9/2013 5:35 AM
To: hdfs-dev@hadoop.apache.org
Cc: Kang Hua; clou...@googlegroups.com
Subject: [Hadoop]Environment variable CLASSPATH not set!

Hi all,

We are developing an HDFS-based file system, HLFS
(http://code.google.com/p/cloudxy/wiki/WHAT_IS_CLOUDXY), and we have
developed an HLFS driver for Libvirt (http://libvirt.org/). But when I boot
a VM from a base Linux OS that was first installed onto our HLFS block
device, it (HDFS or the JVM) says I have not set the CLASSPATH, like the
following:

[...]
uri:hdfs:///tmp/testenv/testfs,head:hdfs,dir:/tmp/testenv,fsname:testfs,hostname:default,port:0,user:kanghua
Environment variable CLASSPATH not set!
^
fs is null, hdfsConnect error!
[...]
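
That message comes from libhdfs: it builds the embedded JVM's classpath from
the CLASSPATH environment variable of the calling process, so the variable
must be visible to the process that calls hdfsConnect, not just to an
interactive shell. A minimal check, assuming the CDH3 libhdfs API (hdfs.h)
and the hostname "default" / port 0 seen in the uri line above:

/* classpath_check.c -- sketch: verify CLASSPATH is visible to this
 * process before handing control to libhdfs. */
#include <stdio.h>
#include <stdlib.h>
#include "hdfs.h"  /* libhdfs header shipped with Hadoop/CDH */

int main(void)
{
    if (getenv("CLASSPATH") == NULL) {
        fprintf(stderr, "CLASSPATH not set in this process\n");
        return 1;
    }
    hdfsFS fs = hdfsConnect("default", 0);  /* hostname:default, port:0 */
    if (fs == NULL) {
        fprintf(stderr, "hdfsConnect failed\n");
        return 1;
    }
    hdfsDisconnect(fs);
    printf("connected and disconnected cleanly\n");
    return 0;
}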




Actually, I have set CLASSPATH in ~/.bashrc, as follows. I have CDH3u2
installed for development, and I can run other HDFS jobs successfully.

$ cat /home/jiawei/.bashrc
[...]
export HLFS_HOME=/home/jiawei/workshop3/hlfs
export LOG_HOME=$HLFS_HOME/3part/log
export SNAPPY_HOME=$HLFS_HOME/3part/snappy
export HADOOP_HOME=$HLFS_HOME/3part/hadoop
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export PATH=/usr/bin/:/usr/local/bin/:/bin/:/usr/sbin/:/sbin/:$JAVA_HOME/bin/
#export LD_LIBRARY_PATH=$JAVAHOME/lib
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/i386/server/:$HADOOP_HOME/lib32/:$LOG_HOME/lib32/:$SNAPPY_HOME/lib32/:$HLFS_HOME/output/lib32/:/usr/lib/
export PKG_CONFIG_PATH=/usr/lib/pkgconfig/:/usr/share/pkgconfig/
export CFLAGS="-L/usr/lib -L/lib -L/usr/lib64"
export CXXFLAGS="-L/usr/lib -L/lib"
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/htmlconverter.jar:$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib/charsets.jar:$JAVA_HOME/jre/lib/deploy.jar:$JAVA_HOME/jre/lib/javaws.jar:$JAVA_HOME/jre/lib/jce.jar:$JAVA_HOME/jre/lib/jsse.jar:$JAVA_HOME/jre/lib/management-agent.jar:$JAVA_HOME/jre/lib/plugin.jar:$JAVA_HOME/jre/lib/resources.jar:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/jre/lib/:$JAVA_HOME/lib/:/usr/lib/hadoop-0.20/conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/usr/lib/hadoop-0.20:/usr/lib/hadoop-0.20/hadoop-core-0.20.2-cdh3u2.jar:/usr/lib/hadoop-0.20/lib/ant-contrib-1.0b3.jar:/usr/lib/hadoop-0.20/lib/aspectjrt-1.6.5.jar:/usr/lib/hadoop-0.20/lib/aspectjtools-1.6.5.jar:/usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar:/usr/lib/hadoop-0.20/lib/commons-codec-1.4.jar:/usr/lib/hadoop-0.20/lib/commons-daemon-1.0.1.jar:/usr/lib/hadoop-0.20/lib/commons-el-1.0.jar:/usr/lib/hadoop-0.20/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-0.20/lib/commons-logging-1.0.4.jar:/usr/lib/hadoop-0.20/lib/commons-logging-api-1.0.4.jar:/usr/lib/hadoop-0.20/lib/commons-net-1.4.1.jar:/usr/lib/hadoop-0.20/lib/core-3.1.1.jar:/usr/lib/hadoop-0.20/lib/hadoop-fairscheduler-0.20.2-cdh3u2.jar:/usr/lib/hadoop-0.20/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20/lib/jackson-core-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib/jackson-mapper-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib/jasper-compiler-5.5.12.jar:/usr/lib/hadoop-0.20/lib/jasper-runtime-5.5.12.jar:/usr/lib/hadoop-0.20/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20/lib/jetty-6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/jetty-util-6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20/lib/junit-4.5.jar:/usr/lib/hadoop-0.20/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20/lib/log4j-1.2.15.jar:/usr/lib/hadoop-0.20/lib/mockito-all-1.8.2.jar:/usr/lib/hadoop-0.20/lib/oro-2.0.8.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-20081211.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-6.1.14.jar:/usr/lib/hado

Re: [cloudxy] RE: [Hadoop]Environment variable CLASSPATH not set!

2013-02-09 Thread harryxiyou
On Sun, Feb 10, 2013 at 5:15 AM, John Gordon  wrote:
Hi John,

> Env variables hang off of the session context and are specific to both the
> user profile and their shell-specific preferences.  If your driver is
> loading in kernel mode, it cannot depend on env variables.
>

Our driver talks directly to Libvirt, HDFS, and QEMU, and, IIUC, these all
load in user mode. I think Gnulib, which is called by Libvirt, may change
the env variables.
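
One way to verify that is to dump the environment the spawned process
actually sees. A small Linux-only sketch (the PID of the QEMU or libvirt
process has to be looked up by hand, e.g. with pgrep):

/* printenv_of.c -- sketch: print another process's environment from
 * /proc/<pid>/environ; entries are NUL-separated. */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    char path[64];
    snprintf(path, sizeof(path), "/proc/%s/environ", argv[1]);
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    int c;
    while ((c = getc(f)) != EOF)
        putchar(c == '\0' ? '\n' : c);  /* one VAR=value per line */
    fclose(f);
    return 0;
}

If CLASSPATH is present in the interactive shell but absent there, the
variable is being dropped (or never inherited) somewhere between the shell
and the process that calls libhdfs.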


> This will be a problem for the other environment variables like HADOOP_HOME.
>
> Instead of using Java directly in kernel mode, I suggest splitting the
> problem:
> 1. An FS abstraction for the kernel:
>    a. Like the NFS filesystem kernel driver implementation, for example --
>       a remote-mount FS.
>    b. Use a C implementation of the protocol.
>       I. To avoid issues, use Hadoop 2.0 for protobufs, since they yield a
>          versioned protocol and avoid hangs and dumps when the protocol
>          changes.
>       II. OR push most of your implementation into a proxy service, and
>           either:
>           a. surface NFS directly, and just use the NFS kernel driver, or
>           b. surface your own protocol to be consumed in the kernel-mode
>              driver.
> 2. Start HDFS elsewhere, as an independent service in user mode like cups,
>    httpd, or xinetd.
>    a. It will have a session and the ability to configure env vars.
>
>
> Not sure if that exactly answers the question, but I hope it was helpful.
>

Absolutely helpful; we will take the approaches you suggested into account.
Thanks very much for your help ;-)

-- 
Thanks
Harry Wei


[jira] [Created] (HDFS-4487) Fix diff report and snapshot deletion for file diff

2013-02-09 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HDFS-4487:


 Summary: Fix diff report and snapshot deletion for file diff
 Key: HDFS-4487
 URL: https://issues.apache.org/jira/browse/HDFS-4487
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Jing Zhao


Diff report generation and snapshot deletion need to be updated to support file diff.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira