[jira] [Created] (HDFS-5126) implement authorized HDFS user impersonation

2013-08-22 Thread Erik.fang (JIRA)
Erik.fang created HDFS-5126:
---

 Summary: implement authorized HDFS user impersonation
 Key: HDFS-5126
 URL: https://issues.apache.org/jira/browse/HDFS-5126
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Reporter: Erik.fang
Priority: Minor


I propose an authorized user impersonation mechanism for fine-grained (path-level) 
access control in HDFS.
In short, the owner of the data encrypts the path with a shared secret, and other 
users use the encrypted path to call namenode services (create/read/delete file). 
The namenode decrypts the path to validate the access and, if valid, executes the 
operation as the owner of the data. It consists of:
1. an ACLFileSystem extending DistributedFileSystem, which wraps the 
create/open/delete/etc. RPC calls and sends the encrypted path to the namenode
2. an authenticator (embedded in the namenode), which decrypts the path and 
executes the call as the owner of the data

With authorized user impersonation, we can develop an authorization manager to 
check whether a path-level access is permitted.
A detailed explanation can be found on the mailing list:
http://mail-archives.apache.org/mod_mbox/hive-dev/201308.mbox/%3CCACkoVCxm+=44kB_4eWtepHe_knkdm0Uzyh=0q-vfybyu8el...@mail.gmail.com%3E
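The encrypt/validate handshake described above can be sketched roughly as follows. This is only an illustration of the idea: all class and method names are hypothetical, and an HMAC over the path stands in for whatever encryption scheme the shared secret would actually drive.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

/** Hypothetical sketch of the shared-secret path token. */
public class PathToken {
    private final SecretKeySpec key;

    public PathToken(byte[] sharedSecret) {
        this.key = new SecretKeySpec(sharedSecret, "HmacSHA256");
    }

    /** Owner side: bind a path and access mode ("r"/"w") to the shared secret. */
    public String issue(String path, String mode) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        byte[] sig = mac.doFinal((path + "|" + mode).getBytes(StandardCharsets.UTF_8));
        return path + "|" + mode + "|" + Base64.getEncoder().encodeToString(sig);
    }

    /** Namenode side: recompute the MAC to validate the token before
     *  executing the operation as the owner of the data. */
    public boolean verify(String token, String mode) throws Exception {
        String[] parts = token.split("\\|");
        return parts.length == 3
            && parts[1].equals(mode)
            && issue(parts[0], parts[1]).equals(token);
    }
}
```

A forged or modified token fails verification because the MAC cannot be recomputed without the shared secret.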


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #707

2013-08-22 Thread Apache Jenkins Server
See 

Changes:

[kihwal] HDFS-4994. Audit log getContentSummary() calls. Contributed by Robert 
Parker.

--
[...truncated 7673 lines...]
[ERROR] location: package com.google.protobuf
[ERROR] 
:[270,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[...the same "cannot find symbol: class Parser" error repeated at many more locations in the generated protobuf sources...]


Hadoop-Hdfs-0.23-Build - Build # 707 - Still Failing

2013-08-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/707/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7866 lines...]
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3313,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3319,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3330,10]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3335,31]
 cannot find symbol
[ERROR] symbol  : class AbstractParser
[ERROR] location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3344,4]
 method does not override or implement a method from a supertype
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4098,12]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4371,104]
 cannot find symbol
[ERROR] symbol  : method getUnfinishedMessage()
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5264,8]
 getUnknownFields() in 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto 
cannot override getUnknownFields() in com.google.protobuf.GeneratedMessage; 
overridden method is final
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5284,19]
 cannot find symbol
[ERROR] symbol  : method 
parseUnknownField(com.google.protobuf.CodedInputStream,com.google.protobuf.UnknownFieldSet.Builder,com.google.protobuf.ExtensionRegistryLite,int)
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5314,15]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5317,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5323,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenk

Jenkins build is back to normal : Hadoop-Hdfs-trunk #1499

2013-08-22 Thread Apache Jenkins Server
See 



[Review Request] For HDFS patches

2013-08-22 Thread Vinayakumar B
Hi all,

I have posted patches on the following JIRAs, and all are in the Patch Available 
state. Some of them have been in that state for quite some time.

Please review these patches and post your comments.  Thanks in advance.

HDFS-5112
HDFS-5031
HDFS-5014
HDFS-4516
HDFS-4223
HDFS-3618
HDFS-3493
HDFS-3405

Regards,
   Vinayakumar B.,


This e-mail and attachments contain confidential information from HUAWEI,
which is intended only for the person or entity whose address is listed
above. Any use of the information contained herein in any way (including,
but not limited to, total or partial disclosure, reproduction, or
dissemination) by persons other than the intended recipient(s) is
prohibited. If you receive this e-mail in error, please notify the sender by
phone or email immediately and delete it!



Re: [VOTE] Release Apache Hadoop 2.0.6-alpha

2013-08-22 Thread Rob Parker

+1 non-binding

verified signatures and checksum
built source
single node cluster tests
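A checksum check like the verification steps above can be done with JDK classes alone; the helper below is a hypothetical sketch (it does not cover the detached GPG signature, which needs gpg itself):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

/** Hypothetical helper: hex digest of a file, for comparing against the
 *  checksum files published alongside a release candidate. */
public class ChecksumCheck {
    static String hexDigest(Path file, String algorithm) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                md.update(buf, 0, n);
            }
        }
        // Hex-encode the digest bytes.
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```

Compare the resulting digest against the checksum files (.mds/.md5/.sha1) published with the RC.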

Rob

On 08/11/2013 12:46 AM, Konstantin Boudnik wrote:

All,

I have created a release candidate (rc0) for hadoop-2.0.6-alpha that I would
like to release.

This is a stabilization release that includes fixes for a couple of issues,
as outlined on the security list.

The RC is available at: http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc0/
The RC tag in svn is here: 
http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc0

The maven artifacts are available via repository.apache.org.

Please try the release bits and vote; the vote will run for the usual 7 days.

Thanks for your voting
   Cos





Re: Implement directory/table level access control in HDFS

2013-08-22 Thread Erik fang
HDFS-5126 has been created for HDFS user impersonation, and I will develop a
prototype in a few weeks.

Thanks,
Erik.fang




On Tue, Aug 20, 2013 at 3:07 PM, Erik fang  wrote:

> Hi folks,
>
>
> HDFS has a POSIX-like permission model, using R, W, X and owner, group,
> other for access control. It is good most of the time, except for:
>
> 1. Data needs to be shared among users
>
> Groups can be used for access control, and the users have to be in the same
> GROUP as the data; the GROUP here stands for the sharing relationship
> between users and data. If many sharing relationships exist, there are
> many groups, which are hard to manage.
>
> 2. Hive
>
> Hive uses a table-based access control model: a user can have SELECT,
> UPDATE, CREATE, and DROP privileges on a certain table, which means R/W
> permission in HDFS. However, Hive’s table-based authorization doesn’t match
> HDFS’s POSIX-like model. For Hive users accessing HDFS, group permissions
> can be deployed, which introduces many groups, or big groups containing many
> sharing relationships.
>
> Inspired by the way an RDBMS manages data, a directory-level access control
> based on authorized user impersonation can be implemented as an extension to
> the POSIX-like permission model.
>
> It consists of:
>
> 1. ACLFileSystem
>
> 2. an authorization manager, which holds access control information and a
> shared secret with the namenode
>
> 3. an authenticator (embedded in the namenode)
>
> Take Hive as an example, where the owner of the data is user DW. The
> procedure is:
>
> 1. A user submits a Hive query or an HCatalog job to access DW’s data. We
> can get the read and write tables/partitions and the corresponding HDFS
> paths. Then an RPC call to the authorization manager is invoked, sending
>
> {user, tablename, table_path, w/r}
>
> 2. The authorization manager does an authorization check to find whether
> the access is allowed. If allowed, it replies with an encrypted table path:
>
> {realuser, encrypted(tablepath+w/r)}
>
> realuser here stands for the owner of the requested data.
>
> 3. ACLFileSystem extends FileSystem. When an open(path) call is invoked,
> it replaces the path with encrypted(tablepath+w/r) and invokes the namenode
> RPC call, such as
>
> open(realuser, encrypted(tablepath+w/r), null)
>
> If the user is requesting a partition path, the RPC call can be invoked as
>
> open(realuser, encrypted(tablepath+w/r), path_suffix)
>
> 4. The namenode picks up the RPC call and decrypts encrypted(hdfspath+w/r)
> with the shared secret to verify that it is not forged. If it is valid, it
> checks the w/r operation, joins the tablepath and path_suffix, and invokes
> the call as the hdfspath owner, for example user DW.
>
>
> A delegation token or something else can be used as the shared secret, and
> the authorization manager can be integrated into the Hive metastore.
>
> In general, I propose an HDFS user impersonation mechanism and an
> authorization mechanism based on HDFS user impersonation.
>
> If the community is interested, I will soon file a JIRA for HDFS user
> impersonation and a JIRA for the authorization manager.
>
>
> Thoughts?
>
> Thanks a lot
> Erik.fang
>
>
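The four-step flow in the quoted proposal can be simulated compactly. Everything below is hypothetical (the grant table, the RPC shapes, the method names), and an HMAC stands in for the unspecified encryption:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;
import java.util.Map;

/** Toy end-to-end simulation of the proposed impersonation flow. */
public class ImpersonationFlow {
    static final byte[] SHARED_SECRET = "namenode+authz-manager".getBytes();
    // Access-control information held by the authorization manager:
    // "user:path:mode" -> owner of the data.
    static final Map<String, String> GRANTS = Map.of("alice:/dw/t1:r", "dw");

    static String mac(String s) throws Exception {
        Mac m = Mac.getInstance("HmacSHA256");
        m.init(new SecretKeySpec(SHARED_SECRET, "HmacSHA256"));
        return Base64.getEncoder().encodeToString(m.doFinal(s.getBytes()));
    }

    /** Steps 1-2: client sends {user, path, mode}; the manager checks the
     *  grant and replies {realuser, token}. */
    static String[] authorize(String user, String path, String mode) throws Exception {
        String owner = GRANTS.get(user + ":" + path + ":" + mode);
        if (owner == null) throw new SecurityException("access not permitted");
        return new String[] { owner, path + "|" + mode + "|" + mac(path + "|" + mode) };
    }

    /** Steps 3-4: the namenode validates the token with the shared secret
     *  and, if genuine, would execute the call as realuser. */
    static String open(String realuser, String token) throws Exception {
        String[] p = token.split("\\|");
        if (p.length != 3 || !mac(p[0] + "|" + p[1]).equals(p[2])) {
            throw new SecurityException("forged token");
        }
        return "opened " + p[0] + " as " + realuser;
    }
}
```

In a real implementation the authorize step would be an RPC to the authorization manager and open would be the ACLFileSystem-to-namenode call; here both are local methods for illustration.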


Re: [VOTE] Release Apache Hadoop 2.1.0-beta

2013-08-22 Thread Colin McCabe
On Wed, Aug 21, 2013 at 3:49 PM, Stack  wrote:
> On Wed, Aug 21, 2013 at 1:25 PM, Colin McCabe wrote:
>
>> St.Ack wrote:
>>
>> > + Once I figured where the logs were, found that JAVA_HOME was not being
>> > exported (don't need this in hadoop-2.0.5 for instance).  Adding an
>> > exported JAVA_HOME to my running shell, which doesn't seem right, but it took
>> > care of it (I gave up pretty quick on messing w/
>> > yarn.nodemanager.env-whitelist and yarn.nodemanager.admin-env -- I wasn't
>> > getting anywhere)
>>
>> I thought that we were always supposed to have JAVA_HOME set when
>> running any of these commands.  At least, I do.  How else can the
>> system disambiguate between different Java installs?  I need 2
>> installs to test with JDK7.
>>
>>
>
> That is fair enough but I did not need to define this explicitly previously
> (for hadoop-2.0.5-alpha for instance) or the JAVA_HOME that was figured in
> start scripts was propagated and now is not (I have not dug in).
>
>
>
>> > + This did not seem to work for me:
>> > hadoop.security.group.mapping =
>> > org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback
>>
>> We've seen this before.  I think your problem is that you have
>> java.library.path set correctly (what System.loadLibrary checks), but
>> your system library path does not include a necessary dependency of
>> libhadoop.so-- most likely, libjvm.so.  Probably, we should fix
>> NativeCodeLoader to actually make a function call in libhadoop.so
>> before it declares everything OK.
>>
>
> My expectation was that if native group lookup fails, as it does here, then
> the 'Fallback' would kick in and we'd do the Shell query.  This mechanism
> does not seem to be working.

I filed https://issues.apache.org/jira/browse/HADOOP-9895 to address this issue.

best,
Colin


>
>
> St.Ack


Re: [VOTE] Release Apache Hadoop 2.1.0-beta

2013-08-22 Thread Rob Parker

+1 non-binding

Verified signatures and checksums on binary and source tar files
Built the source
Ran some tests on a pseudo-distributed cluster.

Rob

On 08/15/2013 09:15 PM, Arun C Murthy wrote:

Folks,

I've created a release candidate (rc2) for hadoop-2.1.0-beta that I would like 
to get released - this fixes the bugs we saw since the last go-around (rc1).

The RC is available at: 
http://people.apache.org/~acmurthy/hadoop-2.1.0-beta-rc2/
The RC tag in svn is here: 
http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.0-beta-rc2

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days.

thanks,
Arun

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/







[jira] [Resolved] (HDFS-5125) TestCreateEditsLog#testCanLoadCreatedEditsLog fails on Windows in globStatus

2013-08-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-5125.
-

Resolution: Duplicate
  Assignee: Binglin Chang

bq. Chris Nauroth, can you verify the patch in HADOOP-9887 can fix this test 
failure?

Ah, you're absolutely right.  HADOOP-9887 fixes it.  I really should have 
thought to try that first, considering I'm the reviewer on HADOOP-9887.  :-)

Thank you for investigating, Binglin, and I hope this didn't waste too much of 
your time.

> TestCreateEditsLog#testCanLoadCreatedEditsLog fails on Windows in globStatus
> 
>
> Key: HDFS-5125
> URL: https://issues.apache.org/jira/browse/HDFS-5125
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Chris Nauroth
>Assignee: Binglin Chang
>
> This test calls the {{CreateEditsLog}} tool, then runs {{globStatus}} on the 
> local file system to find the resulting files before moving them to a 
> directory where a NameNode can start up and read them.  The HADOOP-9877 patch 
> has caused this test to start failing on Windows due to internal calls to 
> {{FileContext#getFileLinkStatus}} rejecting the {{Path}} arguments.



RE: [VOTE] Release Apache Hadoop 2.1.0-beta

2013-08-22 Thread Bikas Saha
+1 (non-binding)

Bikas

-Original Message-
From: Hitesh Shah [mailto:hit...@apache.org]
Sent: Wednesday, August 21, 2013 4:25 PM
To: yarn-...@hadoop.apache.org; Arun Murthy
Cc: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
mapreduce-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.1.0-beta

+1 (binding)

Verified checksums, built from source and ran basic MR jobs on a
single-node cluster.

-- Hitesh

On Aug 15, 2013, at 2:15 PM, Arun C Murthy wrote:

> Folks,
>
> I've created a release candidate (rc2) for hadoop-2.1.0-beta that I
would like to get released - this fixes the bugs we saw since the last
go-around (rc1).
>
> The RC is available at:
> http://people.apache.org/~acmurthy/hadoop-2.1.0-beta-rc2/
> The RC tag in svn is here:
> http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.0-beta-
> rc2
>
> The maven artifacts are available via repository.apache.org.
>
> Please try the release and vote; the vote will run for the usual 7 days.
>
> thanks,
> Arun
>
> --
> Arun C. Murthy
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> --
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or
> entity to which it is addressed and may contain information that is
> confidential, privileged and exempt from disclosure under applicable
> law. If the reader of this message is not the intended recipient, you
> are hereby notified that any printing, copying, dissemination,
> distribution, disclosure or forwarding of this communication is
> strictly prohibited. If you have received this communication in error,
> please contact the sender immediately and delete it from your system.
Thank You.



Re: [VOTE] Release Apache Hadoop 2.1.0-beta

2013-08-22 Thread Steve Loughran
+1, binding

Review process

# symlink /usr/local/bin/protoc to the homebrew installed 2.5.0 version

# delete all 2.1.0-beta artifacts in the mvn repo:

  find ~/.m2 -name 2.1.0-beta -print | xargs rm -rf

# checkout hbase apache/branch-0.95 (commit # b58d596 )

# switch to ASF repo (arun's private repo is in the POM, with JARs with the
same sha1 sum, I'm just being rigorous)

  <id>ASF Staging</id>
  <url>https://repository.apache.org/content/groups/staging/</url>


# clean build of hbase tar against the beta artifacts

mvn clean install assembly:single -DskipTests -Dmaven.javadoc.skip=true
-Dhadoop.profile=2.0 -Dhadoop-two.version=2.1.0-beta

# Observe DL taking place

[INFO] --- maven-assembly-plugin:2.4:single (default-cli) @ hbase ---
[INFO] Assemblies have been skipped per configuration of the skipAssembly
parameter.
[INFO]

[INFO]

[INFO] Building HBase - Common 0.95.3-SNAPSHOT
[INFO]

Downloading:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-annotations/2.1.0-beta/hadoop-annotations-2.1.0-beta.pom
Downloaded:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-annotations/2.1.0-beta/hadoop-annotations-2.1.0-beta.pom(2
KB at 3.3 KB/sec)
Downloading:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-project/2.1.0-beta/hadoop-project-2.1.0-beta.pom
Downloaded:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-project/2.1.0-beta/hadoop-project-2.1.0-beta.pom(34
KB at 383.8 KB/sec)
Downloading:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-main/2.1.0-beta/hadoop-main-2.1.0-beta.pom
Downloaded:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-main/2.1.0-beta/hadoop-main-2.1.0-beta.pom(17
KB at 184.0 KB/sec)
Downloading:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-common/2.1.0-beta/hadoop-common-2.1.0-beta.pom
Downloaded:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-common/2.1.0-beta/hadoop-common-2.1.0-beta.pom(26
KB at 293.3 KB/sec)
Downloading:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-project-dist/2.1.0-beta/hadoop-project-dist-2.1.0-beta.pom
Downloaded:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-project-dist/2.1.0-beta/hadoop-project-dist-2.1.0-beta.pom(17
KB at 176.2 KB/sec)
Downloading:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-auth/2.1.0-beta/hadoop-auth-2.1.0-beta.pom
Downloaded:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-auth/2.1.0-beta/hadoop-auth-2.1.0-beta.pom(7
KB at 79.0 KB/sec)
Downloading:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-annotations/2.1.0-beta/hadoop-annotations-2.1.0-beta.jar
Downloading:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-common/2.1.0-beta/hadoop-common-2.1.0-beta.jar
Downloading:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-auth/2.1.0-beta/hadoop-auth-2.1.0-beta.jar
Downloaded:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-annotations/2.1.0-beta/hadoop-annotations-2.1.0-beta.jar(17
KB at 146.1 KB/sec)
Downloaded:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-auth/2.1.0-beta/hadoop-auth-2.1.0-beta.jar(47
KB at 241.7 KB/sec)
Downloaded:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-common/2.1.0-beta/hadoop-common-2.1.0-beta.jar(2657
KB at 2260.9 KB/sec)
[INFO]


# get sha1 sum of hadoop-common-2.1.0-beta artifact in
https://repository.apache.org/content/groups/staging/ :
0166f5c94d3699b3a37efc16ebb1ceea3acb3b53

# verify version of artifact in local m2 repo
$ sha1sum
~/.m2/repository/org/apache/hadoop/hadoop-common/2.1.0-beta/hadoop-common-2.1.0-beta.jar
0166f5c94d3699b3a37efc16ebb1ceea3acb3b53

# in hbase/hbase-assembly/target , gunzip then untar the
hbase-0.95.3-SNAPSHOT-bin.tar file

# Patch the Hoya POM to use 2.1.0-beta instead of a local 2.1.1-SNAPSHOT

# run some of the hbase cluster deploy & flexing tests


 mvn clean test  -Pstaging

 (all tests pass after 20 min)

== functional tests ==

# build the Hoya JAR with classpath

mvn package -Pstaging

# D/L the binary .tar.gz file, and scp to an ubuntu VM with the hadoop conf
properties for net-accessible HDFS & YARN services, no memory limits on
containers

https://github.com/hortonworks/hoya/tree/master/src/test/configs/ubuntu
https://github.com/hortonworks/hoya/blob/master/src/test/configs/ubuntu/core-site.xml
https://github.com/hortonworks/hoya/blob/master/src/test/configs/ubuntu/yarn-site.xml

# stop the running hadoop-2.1.1-snapshot cluster

# start the new cluster serv

[jira] [Resolved] (HDFS-5052) Add cacheRequest/uncacheRequest support to NameNode

2013-08-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-5052.


Resolution: Fixed

committed

> Add cacheRequest/uncacheRequest support to NameNode
> ---
>
> Key: HDFS-5052
> URL: https://issues.apache.org/jira/browse/HDFS-5052
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5052.003.patch, HDFS-5052.004.patch, 
> HDFS-5052-caching.002.patch, HDFS-5052-caching.004.patch, 
> HDFS-5052-caching.005.patch
>
>
> Add cacheRequest/uncacheRequest/listCacheRequest support to DFSAdmin and 
> NameNode.  Maintain a list of active CachingRequests on the NameNode.



Re: [VOTE] Release Apache Hadoop 2.0.6-alpha (RC1)

2013-08-22 Thread Konstantin Boudnik
I am clearly +1 (non-binding) on the release.

With 8 +1s (5 binding) and no -1s or 0s, the vote passes.

Thanks to all who verified the bits, I'll push them out shortly.

Thanks,
  Cos

On Thu, Aug 15, 2013 at 10:29PM, Konstantin Boudnik wrote:
> All,
> 
> I have created a release candidate (rc1) for hadoop-2.0.6-alpha that I would
> like to release.
> 
> This is a stabilization release that includes fixes for a couple of issues,
> as outlined on the security list.
> 
> The RC is available at: http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc1/
> The RC tag in svn is here: 
> http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc1
> 
> The maven artifacts are available via repository.apache.org.
> 
> The only difference between rc0 and rc1 is ASL added to releasenotes.html and
> updated release dates in CHANGES.txt files.
> 
> Please try the release bits and vote; the vote will run for the usual 7 days.
> 
> Thanks for your voting
>   Cos
> 



