[jira] [Created] (HDFS-2647) Enable protobuf RPC for InterDatanodeProtocol, ClientDatanodeProtocol, JournalProtocol and NamenodeProtocol

2011-12-09 Thread Suresh Srinivas (Created) (JIRA)
Enable protobuf RPC for InterDatanodeProtocol, ClientDatanodeProtocol, 
JournalProtocol and NamenodeProtocol
---

 Key: HDFS-2647
 URL: https://issues.apache.org/jira/browse/HDFS-2647
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: balancer, data-node, hdfs client, name-node
Affects Versions: 0.24.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-2647.txt

This jira changes the clients and servers to use protobuf-based RPC for the 
protocols mentioned in the summary.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-2648) TestDelegationToken fails

2011-12-09 Thread Tsz Wo (Nicholas), SZE (Created) (JIRA)
TestDelegationToken fails
-

 Key: HDFS-2648
 URL: https://issues.apache.org/jira/browse/HDFS-2648
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE


Running org.apache.hadoop.hdfs.security.TestDelegationToken
Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 15.736 sec <<< FAILURE!

Results :

Tests in error: 
  testDelegationTokenWithDoAs(org.apache.hadoop.hdfs.security.TestDelegationToken): Illegal principal name JobTracker/foo@foo.com

Tests run: 5, Failures: 0, Errors: 1, Skipped: 0
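For context on the failure message above: a Kerberos principal has the form primary[/instance]@REALM, so "JobTracker/foo@foo.com" parses as primary=JobTracker, instance=foo, realm=foo.com, and Hadoop's principal handling rejects names whose realm it cannot map to a short name. The following is a minimal illustrative parser, not Hadoop's actual KerberosName class:

```java
// Illustrative sketch (hypothetical class, not Hadoop's KerberosName):
// split a Kerberos principal "primary[/instance]@REALM" into its parts.
public class PrincipalParts {
    public final String primary, instance, realm;

    public PrincipalParts(String principal) {
        int at = principal.lastIndexOf('@');
        String name = at < 0 ? principal : principal.substring(0, at);
        this.realm = at < 0 ? "" : principal.substring(at + 1);
        int slash = name.indexOf('/');
        this.primary = slash < 0 ? name : name.substring(0, slash);
        this.instance = slash < 0 ? "" : name.substring(slash + 1);
    }
}
```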





Jenkins build is still unstable: Hadoop-Hdfs-0.23-Build #102

2011-12-09 Thread Apache Jenkins Server




Hadoop-Hdfs-0.23-Build - Build # 102 - Still Unstable

2011-12-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/102/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 14516 lines...]
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
hadoop-hdfs-project ---
[INFO] Installing 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs-project/0.23.1-SNAPSHOT/hadoop-hdfs-project-0.23.1-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.7:jar (module-javadocs) @ hadoop-hdfs-project 
---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [6:53.841s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [40.662s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.059s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 7:35.011s
[INFO] Finished at: Fri Dec 09 11:41:08 UTC 2011
[INFO] Final Memory: 69M/757M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Publishing Clover coverage report...
Publishing Clover HTML report...
Publishing Clover XML report...
Publishing Clover coverage results...
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Updating HADOOP-7758
Updating HADOOP-7870
Updating HADOOP-6886
Updating HDFS-2594
Updating HADOOP-6840
Updating HDFS-2178
Updating MAPREDUCE-3513
Updating MAPREDUCE-3327
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
4 tests failed.
FAILED:  org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation

Error Message:
expected:<401> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<401> but was:<200>
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:283)
at junit.framework.Assert.assertEquals(Assert.java:64)
at junit.framework.Assert.assertEquals(Assert.java:195)
at junit.framework.Assert.assertEquals(Assert.java:201)
at org.apache.hadoop.fs.http.server.TestHttpFSServer.__CLR3_0_2bnqkmt1wc(TestHttpFSServer.java:97)
at org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation(TestHttpFSServer.java:86)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.apache.hadoop.test.TestHdfsHelper$HdfsStatement.evaluate(TestHdfsHelper.java:73)
at org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:108)
at org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:108)
at org.apache.hadoop.test.TestJ

Hadoop-Hdfs-trunk - Build # 889 - Still Unstable

2011-12-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/889/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 12777 lines...]
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS Project 0.24.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.7:jar (module-javadocs) @ hadoop-hdfs-project 
---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [1:48:51.918s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [36.121s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.078s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 1:49:28.542s
[INFO] Finished at: Fri Dec 09 13:22:53 UTC 2011
[INFO] Final Memory: 56M/767M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Updating HADOOP-7897
Updating HADOOP-7870
Updating HADOOP-6886
Updating HDFS-2594
Updating HADOOP-6840
Updating HDFS-2178
Updating MAPREDUCE-3513
Updating HADOOP-7851
Updating MAPREDUCE-3327
Updating HADOOP-7898
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
5 tests failed.
FAILED:  org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation

Error Message:
expected:<401> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<401> but was:<200>
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:283)
at junit.framework.Assert.assertEquals(Assert.java:64)
at junit.framework.Assert.assertEquals(Assert.java:195)
at junit.framework.Assert.assertEquals(Assert.java:201)
at org.apache.hadoop.fs.http.server.TestHttpFSServer.__CLR3_0_2bnqkmt1wc(TestHttpFSServer.java:97)
at org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation(TestHttpFSServer.java:86)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.apache.hadoop.test.TestHdfsHelper$HdfsStatement.evaluate(TestHdfsHelper.java:73)
at org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:108)
at org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:108)
at org.apache.hadoop.test.TestJettyHelper$1.evalua

Jenkins build is still unstable: Hadoop-Hdfs-trunk #889

2011-12-09 Thread Apache Jenkins Server




[jira] [Created] (HDFS-2649) eclipse:eclipse build fails for hadoop-hdfs-httpfs

2011-12-09 Thread Jason Lowe (Created) (JIRA)
eclipse:eclipse build fails for hadoop-hdfs-httpfs
--

 Key: HDFS-2649
 URL: https://issues.apache.org/jira/browse/HDFS-2649
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.24.0
Reporter: Jason Lowe


Building the eclipse:eclipse target fails in the hadoop-hdfs-httpfs project 
with this error:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-eclipse-plugin:2.8:eclipse (default-cli) on project hadoop-hdfs-httpfs: Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[httpfs.properties], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[httpfs.properties|**/*.java], test=false, filtering=false -> [Help 1]

This appears to be the same type of issue as reported in HADOOP-7567.





Re: how to compile proto files?

2011-12-09 Thread Alejandro Abdelnur
Suresh,

One proto file at the time works, but doing *.proto fails complaining about
duplicate definitions.

Thxs.

Alejandro

On Thu, Dec 8, 2011 at 11:46 PM, Suresh Srinivas wrote:

> Under hadoop-hdfs/src I run:
> Protoc -I=proto proto/.proto --java_out main/src
>
>
>
> On Thursday, December 8, 2011, Alejandro Abdelnur 
> wrote:
> > I'm trying to change the build to compile the proto files (instead
> checking
> > in the generated java files)
> >
> > However, I'm not able to run protoc successfully.
> >
> > Tried
> >
> > $ protoc -Isrc/proto/ --java_out=target/generated-sources/java
> > src/proto/*.proto
> >
> > but it does not work.
> >
> > Thanks.
> >
> > Alejandro
> >
>


Re: how to compile proto files?

2011-12-09 Thread Suresh Srinivas
I missed that. I do not know if you can do that. BTW, YARN generates files
using protoc; it might be a good idea to look at their build.

I have a jira opened to address this issue, HDFS-2578. Feel free to assign
it to yourself, if you want to work on it.

On Fri, Dec 9, 2011 at 10:14 AM, Alejandro Abdelnur wrote:

> Suresh,
>
> One proto file at the time works, but doing *.proto fails complaining about
> duplicate definitions.
>
> Thxs.
>
> Alejandro
>
> On Thu, Dec 8, 2011 at 11:46 PM, Suresh Srinivas  >wrote:
>
> > Under hadoop-hdfs/src I run:
> > Protoc -I=proto proto/.proto --java_out main/src
> >
> >
> >
> > On Thursday, December 8, 2011, Alejandro Abdelnur 
> > wrote:
> > > I'm trying to change the build to compile the proto files (instead
> > checking
> > > in the generated java files)
> > >
> > > However, I'm not able to run protoc successfully.
> > >
> > > Tried
> > >
> > > $ protoc -Isrc/proto/ --java_out=target/generated-sources/java
> > > src/proto/*.proto
> > >
> > > but it does not work.
> > >
> > > Thanks.
> > >
> > > Alejandro
> > >
> >
>


[jira] [Created] (HDFS-2650) Replace @inheritDoc with @Override

2011-12-09 Thread Hari Mankude (Created) (JIRA)
Replace @inheritDoc with @Override 
---

 Key: HDFS-2650
 URL: https://issues.apache.org/jira/browse/HDFS-2650
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Fix For: 0.24.0


@Override provides both the javadoc inherited from the superclass and 
compile-time detection when an overridden method is deleted from the superclass.
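As an illustration of the proposed replacement (the class names here are hypothetical, not from the HDFS codebase): {@inheritDoc} only copies javadoc, while @Override additionally makes the compiler flag the method if its superclass counterpart is ever removed or renamed:

```java
class Base {
    /** Returns a greeting. */
    String greet() { return "hello from Base"; }
}

class Derived extends Base {
    // Before: the javadoc used {@inheritDoc} with no annotation.
    // After: @Override inherits the javadoc too, and compilation fails
    // here if Base#greet() is deleted or renamed.
    @Override
    String greet() { return "hello from Derived"; }
}
```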





[jira] [Created] (HDFS-2651) ClientNameNodeProtocol Translators for Protocol Buffers

2011-12-09 Thread Sanjay Radia (Created) (JIRA)
ClientNameNodeProtocol Translators for Protocol Buffers
---

 Key: HDFS-2651
 URL: https://issues.apache.org/jira/browse/HDFS-2651
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: protoClientNNTranslators1.patch







[jira] [Resolved] (HDFS-2578) Use protoc to generate protobuf artifacts instead of checking it in.

2011-12-09 Thread Alejandro Abdelnur (Resolved) (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Abdelnur resolved HDFS-2578.
--

Resolution: Duplicate

This is duplicate of HDFS-2511.

Regarding Doug's comment, I'll follow up with a JIRA to check the protoc 
version in the base POM.

> Use protoc to generate protobuf artifacts instead of checking it in.
> 
>
> Key: HDFS-2578
> URL: https://issues.apache.org/jira/browse/HDFS-2578
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 0.24.0
>
>






[jira] [Created] (HDFS-2652) Port token service changes from 205

2011-12-09 Thread Daryn Sharp (Created) (JIRA)
Port token service changes from 205
---

 Key: HDFS-2652
 URL: https://issues.apache.org/jira/browse/HDFS-2652
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.24.0, 0.23.1
Reporter: Daryn Sharp
Assignee: Daryn Sharp


Need to merge the 205 token bug fixes and the feature to enable hostname-based 
tokens. See the JIRAs linked to HADOOP-7808 for more details.





RE: Review HDFS-1765

2011-12-09 Thread Uma Maheswara Rao G
Thanks a lot for the Nice reviews, Eli!
I have updated the patch by fixing the comments. Whenever you get the time, 
please take a look.

Thanks
Uma

From: Uma Maheswara Rao G
Sent: Thursday, December 08, 2011 12:52 AM
To: hdfs-dev@hadoop.apache.org
Subject: RE: Review HDFS-1765

Thank you very much, Eli.

Regards,
Uma

From: Eli Collins [e...@cloudera.com]
Sent: Thursday, December 08, 2011 12:45 AM
To: hdfs-dev@hadoop.apache.org
Subject: Re: Review HDFS-1765

I'll take a look.

On Wed, Dec 7, 2011 at 11:12 AM, Uma Maheswara Rao G
 wrote:
> Hi,
>
>  If you have some free time, can you please review HDFS-1765?
>
> Thanks in advance.
>
> Thanks & Regards,
> Uma


[jira] [Created] (HDFS-2653) DFSClient should cache whether addrs are non-local when short-circuiting is enabled

2011-12-09 Thread Eli Collins (Created) (JIRA)
DFSClient should cache whether addrs are non-local when short-circuiting is 
enabled
---

 Key: HDFS-2653
 URL: https://issues.apache.org/jira/browse/HDFS-2653
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.1, 1.0.0
Reporter: Eli Collins
Assignee: Eli Collins


Something Todd mentioned to me off-line. Currently DFSClient doesn't cache the 
fact that non-local reads are non-local, so if short-circuiting is enabled, 
every time we create a block reader we go through the isLocalAddress code 
path. We should cache the fact that an addr is non-local as well.
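As a rough sketch of the idea (the class below is invented for illustration and is not the DFSClient code; the real locality check scans local network interfaces), the change amounts to caching negative as well as positive results, so the expensive check runs at most once per address:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class LocalityCache {
    // Cache both outcomes: true = local, false = non-local. If only the
    // "local" result were remembered, every block reader created for a
    // remote datanode would re-run the expensive check.
    private final Map<String, Boolean> cache = new ConcurrentHashMap<>();
    private final Set<String> localAddrs; // stand-in for the interface scan

    LocalityCache(Set<String> localAddrs) { this.localAddrs = localAddrs; }

    boolean isLocal(String addr) {
        return cache.computeIfAbsent(addr, localAddrs::contains);
    }

    int size() { return cache.size(); }
}
```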





[jira] [Created] (HDFS-2654) Make BlockReaderLocal not extend RemoteBlockReader2

2011-12-09 Thread Eli Collins (Created) (JIRA)
Make BlockReaderLocal not extend RemoteBlockReader2
---

 Key: HDFS-2654
 URL: https://issues.apache.org/jira/browse/HDFS-2654
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.1, 1.0.0
Reporter: Eli Collins
Assignee: Eli Collins


The BlockReaderLocal code paths are easier to understand (especially on 
branch-1, where BlockReaderLocal inherits code from BlockReader and 
FSInputChecker) if the local and remote block reader implementations are 
independent, and they're not really sharing much code anyway. If for some 
reason they start to share significant code, we can make the BlockReader 
interface an abstract class.
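The proposed structure can be sketched as follows (simplified, hypothetical signatures; the real BlockReader API has more methods): local and remote readers each implement a common interface rather than one extending the other:

```java
interface BlockReader {
    int read(byte[] buf, int off, int len);
}

// Each implementation stands alone: neither inherits the other's buffering
// or checksum logic, so a change to one cannot silently alter the other.
class LocalBlockReader implements BlockReader {
    public int read(byte[] buf, int off, int len) {
        return 0; // would read directly from the local block file
    }
}

class RemoteBlockReader implements BlockReader {
    public int read(byte[] buf, int off, int len) {
        return 0; // would read over the datanode socket protocol
    }
}
```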





[jira] [Created] (HDFS-2655) BlockReaderLocal#skip performs unnecessary IO

2011-12-09 Thread Eli Collins (Created) (JIRA)
BlockReaderLocal#skip performs unnecessary IO
-

 Key: HDFS-2655
 URL: https://issues.apache.org/jira/browse/HDFS-2655
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.1
Reporter: Eli Collins


Per HDFS-2654, BlockReaderLocal#skip performs the skip by reading the data so 
we stay in sync with checksums. This could be implemented more efficiently in 
the future: skip to the beginning of the appropriate checksum chunk and then 
read only up to the target offset within that chunk.
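The saving can be sketched arithmetically (a hypothetical helper, not the actual patch; 512 bytes per checksum chunk is an assumed figure): instead of reading all n skipped bytes, seek to the start of the chunk containing the target position and read only up to the target offset:

```java
class ChunkedSkip {
    static final int CHUNK = 512; // assumed bytes covered by one checksum

    // Given the current position and a skip of n bytes, return how many
    // bytes must actually be read to stay checksum-aligned: just the
    // prefix of the destination chunk, not all n bytes a naive
    // read-through skip would consume.
    static long bytesToRead(long pos, long n) {
        long target = pos + n;
        long chunkStart = (target / CHUNK) * CHUNK;
        return target - chunkStart;
    }
}
```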
