[jira] [Reopened] (HADOOP-8249) invalid hadoop-auth cookies should trigger authentication if info is avail before returning HTTP 401

2012-04-07 Thread Alejandro Abdelnur (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur reopened HADOOP-8249:



We need a backport for Hadoop 1.

> invalid hadoop-auth cookies should trigger authentication if info is avail 
> before returning HTTP 401
> 
>
> Key: HADOOP-8249
> URL: https://issues.apache.org/jira/browse/HADOOP-8249
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.1, 2.0.0
>Reporter: bc Wong
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HADOOP-8249.patch, HDFS-3198_branch-1.patch
>
>
> WebHdfs gives out cookies. But when the client passes them back, it'd 
> sometimes reject them and return an HTTP 401 instead. ("Sometimes" as in 
> after a restart.) The interesting thing is that if the client doesn't pass 
> the cookie back, WebHdfs will be totally happy.
> The correct behaviour should be to ignore the cookie if it looks invalid and 
> attempt to proceed with the request handling (see the sketch after the curl 
> transcript below).
> I haven't tried HttpFs to see whether it handles restart better.
> Reproducing it with curl:
> {noformat}
> 
> ## Initial curl. Storing cookie to file.
> 
> [root@vbox2 ~]# curl -c /tmp/webhdfs.cookie -i 
> 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS&user.name=bcwalrus'
> HTTP/1.1 200 OK
> Content-Type: application/json
> Expires: Thu, 01-Jan-1970 00:00:00 GMT
> Set-Cookie: 
> hadoop.auth="u=bcwalrus&p=bcwalrus&t=simple&e=1333614686366&s=z2w5xpFlufnnEoOHxVRiXqxwtqM=";Path=/
> Content-Length: 597
> Server: Jetty(6.1.26)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333577906198,"owner":"mapred","pathSuffix":"tmp","permission":"1777","replication":0,"type":"DIRECTORY"},
> {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333577511848,"owner":"hdfs","pathSuffix":"user","permission":"1777","replication":0,"type":"DIRECTORY"},
> {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333428745116,"owner":"mapred","pathSuffix":"var","permission":"755","replication":0,"type":"DIRECTORY"}
> ]}}
> 
> ## Another curl. Using the cookie jar.
> 
> [root@vbox2 ~]# curl -b /tmp/webhdfs.cookie -i 
> 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS&user.name=bcwalrus'
> HTTP/1.1 200 OK
> Content-Type: application/json
> Content-Length: 597
> Server: Jetty(6.1.26)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333577906198,"owner":"mapred","pathSuffix":"tmp","permission":"1777","replication":0,"type":"DIRECTORY"},
> {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333577511848,"owner":"hdfs","pathSuffix":"user","permission":"1777","replication":0,"type":"DIRECTORY"},
> {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333428745116,"owner":"mapred","pathSuffix":"var","permission":"755","replication":0,"type":"DIRECTORY"}
> ]}}
> 
> ## Restart NN.
> 
> [root@vbox2 ~]# /etc/init.d/hadoop-hdfs-namenode restart
> Stopping Hadoop namenode:  [  OK  ]
> stopping namenode
> Starting Hadoop namenode:  [  OK  ]
> starting namenode, logging to 
> /var/log/hadoop-hdfs/hadoop-hdfs-namenode-vbox2.out
> 
> ## Curl using cookie jar gives error.
> 
> [root@vbox2 ~]# curl -b /tmp/webhdfs.cookie -i 
> 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS&user.name=bcwalrus'
> HTTP/1.1 401 org.apache.hadoop.security.authentication.util.SignerException: 
> Invalid signature
> Content-Type: text/html; charset=iso-8859-1
> Set-Cookie: hadoop.auth=;Path=/;Expires=Thu, 01-Jan-1970 00:00:00 GMT
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Length: 1520
> Server: Jetty(6.1.26)
> 
> Error 401 org.apache.hadoop.security.authentication.util.SignerException: 
> Invalid signature
> 
> HTTP ERROR 401
> Problem accessing /webhdfs/v1/. Reason:
> org.apache.hadoop.security.authentication.util.SignerException: Invalid 
> signature
> Powered by Jetty://
> ...
> 
> ## Curl without cookie jar is ok.
> {noformat}
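
A minimal sketch of that "ignore the invalid cookie" behaviour against the 
hadoop-auth Signer API follows. The class and helper names are illustrative, 
and this is a sketch of the idea, not the committed patch:
{noformat}
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import org.apache.hadoop.security.authentication.util.Signer;
import org.apache.hadoop.security.authentication.util.SignerException;

// Sketch only: return the verified cookie payload, or null when the
// cookie is absent OR fails signature verification, so the caller can
// re-run authentication instead of answering with HTTP 401.
public class CookieFallbackSketch {
  public static String validatedAuthCookie(HttpServletRequest req,
                                           Signer signer) {
    String raw = null;
    Cookie[] cookies = req.getCookies();
    if (cookies != null) {
      for (Cookie c : cookies) {
        if ("hadoop.auth".equals(c.getName())) {
          raw = c.getValue();
        }
      }
    }
    if (raw == null) {
      return null;                  // no cookie: authenticate normally
    }
    try {
      return signer.verifyAndExtract(raw);
    } catch (SignerException ex) {
      // Stale signature, e.g. the NameNode restarted with a new secret.
      // Treat the cookie as missing rather than failing the request.
      return null;
    }
  }
}
{noformat}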

Build failed in Jenkins: Hadoop-Common-0.23-Build #215

2012-04-07 Thread Apache Jenkins Server
See 

Changes:

[bobby] HADOOP-8014. ViewFileSystem does not correctly implement 
getDefaultBlockSize, getDefaultReplication, getContentSummary

[acmurthy] Added toString to ContainerToken. Contributed by Jason Lowe.

[bobby] svn merge -c 1310528 from trunk. FIXES: MAPREDUCE-4051. Remove the 
empty hadoop-mapreduce-project/assembly/all.xml file (Ravi Prakash via bobby)

[bobby] svn merge -c 1310507 from trunk. FIXES: HADOOP-8014. ViewFileSystem 
does not correctly implement getDefaultBlockSize, getDefaultReplication, 
getContentSummary (John George via bobby)

--
[...truncated 12308 lines...]
  [javadoc] Loading source files for package org.apache.hadoop.fs.shell...
  [javadoc] Loading source files for package org.apache.hadoop.fs.viewfs...
  [javadoc] Loading source files for package org.apache.hadoop.http...
  [javadoc] Loading source files for package org.apache.hadoop.http.lib...
  [javadoc] Loading source files for package org.apache.hadoop.io...
  [javadoc] Loading source files for package org.apache.hadoop.io.compress...
  [javadoc] Loading source files for package 
org.apache.hadoop.io.compress.bzip2...
  [javadoc] Loading source files for package 
org.apache.hadoop.io.compress.lz4...
  [javadoc] Loading source files for package 
org.apache.hadoop.io.compress.snappy...
  [javadoc] Loading source files for package 
org.apache.hadoop.io.compress.zlib...
  [javadoc] Loading source files for package org.apache.hadoop.io.file.tfile...
  [javadoc] Loading source files for package org.apache.hadoop.io.nativeio...
  [javadoc] Loading source files for package org.apache.hadoop.io.retry...
  [javadoc] Loading source files for package org.apache.hadoop.io.serializer...
  [javadoc] Loading source files for package 
org.apache.hadoop.io.serializer.avro...
  [javadoc] Loading source files for package org.apache.hadoop.ipc...
  [javadoc] Loading source files for package org.apache.hadoop.ipc.metrics...
  [javadoc] Loading source files for package org.apache.hadoop.jmx...
  [javadoc] Loading source files for package org.apache.hadoop.log...
  [javadoc] Loading source files for package org.apache.hadoop.log.metrics...
  [javadoc] Loading source files for package org.apache.hadoop.metrics...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.file...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics.ganglia...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.jvm...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.spi...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.util...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.annotation...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.filter...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.impl...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.lib...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.sink...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.sink.ganglia...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.source...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.util...
  [javadoc] Loading source files for package org.apache.hadoop.net...
  [javadoc] Loading source files for package org.apache.hadoop.record...
  [javadoc] Loading source files for package 
org.apache.hadoop.record.compiler...
  [javadoc] Loading source files for package 
org.apache.hadoop.record.compiler.ant...
  [javadoc] Loading source files for package 
org.apache.hadoop.record.compiler.generated...
  [javadoc] Loading source files for package org.apache.hadoop.record.meta...
  [javadoc] Loading source files for package org.apache.hadoop.security...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.authorize...
  [javadoc] Loading source files for package org.apache.hadoop.security.token...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.token.delegation...
  [javadoc] Loading source files for package org.apache.hadoop.tools...
  [javadoc] Loading source files for package org.apache.hadoop.util...
  [javadoc] Loading source files for package org.apache.hadoop.util.bloom...
  [javadoc] Loading source files for package org.apache.hadoop.util.hash...
  [javadoc] 2 errors
 [xslt] Processing 

 to 

 [xslt] Loading stylesheet 
/home/jenkins/tools/findbugs/latest/src/xsl/default.xsl
[INFO] Executed tasks
[INFO] 
[

Build failed in Jenkins: Hadoop-Common-trunk #369

2012-04-07 Thread Apache Jenkins Server
See 

Changes:

[szetszwo] HDFS-3211. Add fence(..) and replace NamenodeRegistration with 
JournalInfo and epoch in JournalProtocol.  Contributed by suresh

[todd] HDFS-3226. Allow GetConf tool to print arbitrary keys. Contributed by 
Todd Lipcon.

[suresh] HDFS-3121. Add HDFS tests for HADOOP-8014 change. Contributed by John 
George. Missed adding the file in the earlier commit.

[suresh] HDFS-3121. Add HDFS tests for HADOOP-8014 change. Contributed by John 
George.

[bobby] HADOOP-8014. ViewFileSystem does not correctly implement 
getDefaultBlockSize, getDefaultReplication, getContentSummary

[bobby] MAPREDUCE-4110. Fix tests in TestMiniMRClasspath & 
TestMiniMRWithDFSWithDistinctUsers (Devaraj K via bobby)

[acmurthy] Added toString to ContainerToken. Contributed by Jason Lowe.

[bobby] MAPREDUCE-4051. Remove the empty 
hadoop-mapreduce-project/assembly/all.xml file (Ravi Prakash via bobby)

[szetszwo] HDFS-2505. Add a test to verify getFileChecksum(..) with ViewFS.  
Contributed by Ravi Prakash

[bobby] HADOOP-8014. ViewFileSystem does not correctly implement 
getDefaultBlockSize, getDefaultReplication, getContentSummary (John George via 
bobby)

[suresh] HDFS-3136. Remove SLF4J dependency as HDFS does not need it to fix 
unnecessary warnings. Contributed by Jason Lowe.

[bobby] MAPREDUCE-4111. Fix tests in org.apache.hadoop.mapred.TestJobName 
(Devaraj K via bobby)

[bobby] MAPREDUCE-4112. Fix tests 
org.apache.hadoop.mapred.TestClusterMapReduceTestCase (Devaraj K via bobby)

[bobby] MAPREDUCE-4113. Fix tests 
org.apache.hadoop.mapred.TestClusterMRNotification (Devaraj K via bobby)

--
[...truncated 45071 lines...]
[DEBUG]   (f) reactorProjects = [MavenProject: 
org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT @ 

 MavenProject: org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT @ 

 MavenProject: org.apache.hadoop:hadoop-auth-examples:3.0.0-SNAPSHOT @ 

 MavenProject: org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT @ 

 MavenProject: org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ 

[DEBUG]   (f) useDefaultExcludes = true
[DEBUG]   (f) useDefaultManifestFile = false
[DEBUG] -- end configuration --
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-common-project ---
[DEBUG] Configuring mojo 
org.apache.maven.plugins:maven-enforcer-plugin:1.0:enforce from plugin realm 
ClassRealm[plugin>org.apache.maven.plugins:maven-enforcer-plugin:1.0, parent: 
sun.misc.Launcher$AppClassLoader@126b249]
[DEBUG] Configuring mojo 
'org.apache.maven.plugins:maven-enforcer-plugin:1.0:enforce' with basic 
configurator -->
[DEBUG]   (s) fail = true
[DEBUG]   (s) failFast = false
[DEBUG]   (f) ignoreCache = false
[DEBUG]   (s) project = MavenProject: 
org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ 

[DEBUG]   (s) version = [3.0.2,)
[DEBUG]   (s) version = 1.6
[DEBUG]   (s) rules = 
[org.apache.maven.plugins.enforcer.RequireMavenVersion@10277dc, 
org.apache.maven.plugins.enforcer.RequireJavaVersion@10b218c]
[DEBUG]   (s) session = org.apache.maven.execution.MavenSession@2e242b
[DEBUG]   (s) skip = false
[DEBUG] -- end configuration --
[DEBUG] Executing rule: org.apache.maven.plugins.enforcer.RequireMavenVersion
[DEBUG] Rule org.apache.maven.plugins.enforcer.RequireMavenVersion is cacheable.
[DEBUG] Key org.apache.maven.plugins.enforcer.RequireMavenVersion -937312197 
was found in the cache
[DEBUG] The cached results are still valid. Skipping the rule: 
org.apache.maven.plugins.enforcer.RequireMavenVersion
[DEBUG] Executing rule: org.apache.maven.plugins.enforcer.RequireJavaVersion
[DEBUG] Rule org.apache.maven.plugins.enforcer.RequireJavaVersion is cacheable.
[DEBUG] Key org.apache.maven.plugins.enforcer.RequireJavaVersion 48569 was 
found in the cache
[DEBUG] The cached results are still valid. Skipping the rule: 
org.apache.maven.plugins.enforcer.RequireJavaVersion
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-common-project ---
[DEBUG] Configuring mojo 
org.apache.maven.plugins:maven-site-plugin:3.0:attach-descriptor from plugin 
realm ClassRealm[plugin>org.apache.maven.plugins:maven-site-plugin:3.0, parent: 
sun.misc.Launcher$AppClassLoader@126b249]
[DEBUG] Configuring mojo 
'org.apache.maven.plugins:maven-site-plugin:3.0:attach-descript

what does mockmode signify

2012-04-07 Thread Ranjan Banerjee
Hello, 
   I am going through the code for the fair scheduler in order to make some 
changes to it. I came across a variable called mockMode. Can someone tell me 
what it signifies?

Regards,
Ranjan


Re: what does mockmode signify

2012-04-07 Thread Harsh J
It is a toggle property that unit tests enable in order to disable a
few things and make unit testing easier. In particular, it appears the
allocations updater thread is not started during testing when mockMode
is enabled.

Also see TestFairScheduler.java, where mockMode is enabled via
"scheduler = new FairScheduler(clock, true);" for the scheduler object
the test uses. In real use, this mode is never entered.

P.s. Since you mentioned you're changing a few things, you also ought
to know that FairScheduler is being ported over to Hadoop 2.x
currently. See https://issues.apache.org/jira/browse/MAPREDUCE-3451
for more.
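
To make the mock-mode point concrete, here is a minimal sketch of how
such a flag typically gates the background updater. The class, field,
and thread names below are illustrative stand-ins, not the actual
FairScheduler source:

    interface Clock { long getTime(); }   // minimal stand-in type

    class FairSchedulerSketch {
      private final Clock clock;        // scheduler's time source
      private final boolean mockMode;   // true only in unit tests
      private Thread updateThread;

      FairSchedulerSketch(Clock clock, boolean mockMode) {
        this.clock = clock;
        this.mockMode = mockMode;
      }

      void start() {
        if (!mockMode) {
          // Real deployments run a periodic allocations updater; unit
          // tests call update() directly instead of using this thread.
          updateThread = new Thread(new Runnable() {
            public void run() {
              while (!Thread.currentThread().isInterrupted()) {
                update();
                try {
                  Thread.sleep(500);    // hypothetical update interval
                } catch (InterruptedException e) {
                  return;
                }
              }
            }
          }, "FairScheduler update thread");
          updateThread.setDaemon(true);
          updateThread.start();
        }
      }

      void update() { /* recompute fair shares; elided */ }
    }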

On Sun, Apr 8, 2012 at 2:17 AM, Ranjan Banerjee  wrote:
> Hello,
>    I am going through the code for the fair scheduler in order to make some 
> changes to it. I came across a variable called mockMode. Can someone tell me 
> what it signifies?
>
> Regards,
> Ranjan



-- 
Harsh J


Re: what does mockmode signify

2012-04-07 Thread Ranjan Banerjee
Thanks Harsh J.

On 04/07/12, Harsh J   wrote:
> It is a toggle property that unit tests enable in order to disable a
> few things and make unit testing easier. In particular, it appears the
> allocations updater thread is not started during testing when mockMode
> is enabled.
> 
> Also see TestFairScheduler.java, where mockMode is enabled via
> "scheduler = new FairScheduler(clock, true);" for the scheduler object
> the test uses. In real use, this mode is never entered.
> 
> P.s. Since you mentioned you're changing a few things, you also ought
> to know that FairScheduler is being ported over to Hadoop 2.x
> currently. See https://issues.apache.org/jira/browse/MAPREDUCE-3451
> for more.
> 
> On Sun, Apr 8, 2012 at 2:17 AM, Ranjan Banerjee  wrote:
> > Hello,
> >    I am going through the code for the fair scheduler in order to make 
> > some changes to it. I came across a variable called mockMode. Can someone 
> > tell me what it signifies?
> >
> > Regards,
> > Ranjan
> 
> 
> 
> -- 
> Harsh J



[jira] [Resolved] (HADOOP-8260) Auto-HA: Replace ClientBaseWithFixes with our own modified copy of the class

2012-04-07 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HADOOP-8260.
-

   Resolution: Fixed
Fix Version/s: Auto Failover (HDFS-3042)
 Hadoop Flags: Reviewed

Committed to branch. I ran all of the tests that inherit from this class 
before committing.

> Auto-HA: Replace ClientBaseWithFixes with our own modified copy of the class
> 
>
> Key: HADOOP-8260
> URL: https://issues.apache.org/jira/browse/HADOOP-8260
> Project: Hadoop Common
>  Issue Type: Test
>  Components: auto-failover, test
>Affects Versions: Auto Failover (HDFS-3042)
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Fix For: Auto Failover (HDFS-3042)
>
> Attachments: hadoop-8260.txt
>
>
> The class ClientBaseWithFixes is an attempt to add some workaround code to 
> avoid spurious failures due to ZOOKEEPER-1438. But, even after making those 
> workarounds, I've seen a few Jenkins failures due to that issue. Until ZK 
> fixes this issue, I'd like to just copy the test infrastructure into our own 
> code, and remove the problematic JMXEnv verifications.





[jira] [Resolved] (HADOOP-8246) Auto-HA: automatically scope znode by nameservice ID

2012-04-07 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HADOOP-8246.
-

   Resolution: Fixed
Fix Version/s: Auto Failover (HDFS-3042)
 Hadoop Flags: Reviewed

Thanks for the quick review. I ran all the ZKFC tests and committed this to the 
branch.

> Auto-HA: automatically scope znode by nameservice ID
> 
>
> Key: HADOOP-8246
> URL: https://issues.apache.org/jira/browse/HADOOP-8246
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: auto-failover, ha
>Affects Versions: Auto Failover (HDFS-3042)
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: Auto Failover (HDFS-3042)
>
> Attachments: hadoop-8246.txt
>
>
> Some folks who work on operations/deployment pointed out that it would make 
> sense to automatically include the nameservice ID in the base znode used 
> for automatic failover. For example, even though the "root znode" is 
> "/hadoop-ha", we should put the znodes for a nameservice "my-ha-cluster" 
> within "/hadoop-ha/my-ha-cluster". This allows federated setups to work 
> with no additional configuration (see the sketch below).
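
Concretely, the scoping amounts to composing the lock path from the parent 
znode and the nameservice ID. A minimal sketch of that idea, with 
hypothetical method and constant names rather than the committed patch:
{noformat}
// Sketch only: scope the failover znodes by nameservice ID so that
// federated clusters can share one ZooKeeper ensemble without any
// extra configuration. Names here are illustrative.
public final class ZnodeScopingSketch {
  static final String DEFAULT_PARENT_ZNODE = "/hadoop-ha";

  static String scopedParentZnode(String parentZnode, String nameserviceId) {
    // e.g. ("/hadoop-ha", "my-ha-cluster") -> "/hadoop-ha/my-ha-cluster"
    return parentZnode + "/" + nameserviceId;
  }
}
{noformat}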
