[jira] [Created] (HADOOP-9717) Add retry flag/retry attempt count to the RPC requests

2013-07-10 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HADOOP-9717:
---

 Summary: Add retry flag/retry attempt count to the RPC requests
 Key: HADOOP-9717
 URL: https://issues.apache.org/jira/browse/HADOOP-9717
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Suresh Srinivas


The server-side RetryCache lookup can be optimized if the RPC request 
indicates whether it is being retried. This jira proposes adding an 
optional field to the RPC request that indicates whether the request is a retry.
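To make the proposed optimization concrete, here is a minimal sketch (class and method names are hypothetical, not Hadoop's actual RetryCache API): when the request carries no retry flag, the server can skip the cache lookup entirely and only record the response.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model of the proposed optimization; names are hypothetical and this
 * is not the actual Hadoop RetryCache implementation.
 */
public class RetryCacheSketch {
  private final Map<Long, String> responses = new HashMap<>();

  /**
   * Handle a request identified by callId. The new optional "isRetry" flag
   * lets the server skip the cache lookup for first attempts.
   */
  public String handle(long callId, boolean isRetry, String freshResult) {
    if (isRetry) {
      String cached = responses.get(callId);  // lookup only when flagged
      if (cached != null) {
        return cached;                        // replay the earlier response
      }
    }
    responses.put(callId, freshResult);       // record for future retries
    return freshResult;
  }

  public static void main(String[] args) {
    RetryCacheSketch server = new RetryCacheSketch();
    String first = server.handle(42L, false, "created");
    String retry = server.handle(42L, true, "must-not-reexecute");
    System.out.println(first + ", " + retry);  // created, created
  }
}
```

The point of the flag is that the common case (a first attempt) pays no lookup cost at all; only flagged retries touch the cache.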

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: protobuf 2.5.0 failure should be known by wiki

2013-07-10 Thread Steve Loughran
Create a wiki account, then ask for write access, and we'll set you up.

On 10 July 2013 01:56, Akira AJISAKA  wrote:

> Hi,
>
> I've installed ProtocolBuffer 2.5.0 according to [[wiki:
> HowToContribute]], and that is why I failed to build Hadoop and had to
> downgrade protobuf to 2.4.1.
>
> Now I know about HADOOP-9346 and HADOOP-9440 for enabling protobuf 2.5.0, but
> these issues have been left open for months. So, the wiki should note that
> protobuf 2.5.0 fails to build and that protobuf 2.4.1 is recommended
> instead.
>
> I tried to edit wiki but it seems that I don't have the permission.
> How can I edit?
>
> - Akira
>


Re: Bringing Hadoop to Fedora

2013-07-10 Thread Steve Loughran
On 8 July 2013 19:28, Tim St Clair  wrote:

> Arun,
>
>
> https://issues.apache.org/jira/browse/HADOOP-9680 / 9623
>
>
Fixing S3 is something that's been niggling me - something I want to sit down
and do once the Swift stuff is in, and once we've clarified some quirks w/
the FS API and got some more FS contract tests.

There have been about three "move to JetS3t 0.9" JIRAs; they'll need to be
merged in, along with a test that tries to do a many-GB file upload - the kind
of test you'd only run in EC2.

I'm certainly not going to go near this until August; if other people can
play with it too that'd be great - but it does need some good stressing.


Re: protobuf 2.5.0 failure should be known by wiki

2013-07-10 Thread Suresh Srinivas
> Now I know HADOOP-9346 and HADOOP-9440 for enabling protobuf 2.5.0, but
> these issues have been left for months.
>

Have you seen the reason why these issues have not been fixed? See -
https://issues.apache.org/jira/browse/HADOOP-9346?focusedCommentId=13657835&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13657835

If you have a solution for that issue, I would be glad to commit the patch.


>
> I tried to edit wiki but it seems that I don't have the permission.
> How can I edit?
>

Steve already answered this.

Regards,
Suresh


-- 
http://hortonworks.com/download/


Re: [DISCUSS] Hadoop SSO/Token Server Components

2013-07-10 Thread Larry McCay
All -

After combing through this thread - as well as the summit session summary 
thread, I think that we have the following two items that we can probably move 
forward with:

1. TokenAuth method - assuming this means the pluggable authentication 
mechanisms within the RPC layer (2 votes: Kai and Kyle)
2. An actual Hadoop Token format (2 votes: Brian and myself)

I propose that we attack both of these aspects as one. Let's provide the 
structure and interfaces of the pluggable framework for use in the RPC layer 
through leveraging Daryn's pluggability work and POC it with a particular token 
format (not necessarily the only format ever supported - we just need one to 
start). If there has already been work done in this area by anyone then please 
speak up and commit to providing a patch - so that we don't duplicate effort. 

@Daryn - is there a particular Jira or set of Jiras that we can look at to 
discern the pluggability mechanism details? Documentation of it would be great 
as well.
@Kai - do you have existing code for the pluggable token authentication 
mechanism - if not, we can take a stab at representing it with interfaces 
and/or POC code.
I can stand up and say that we have a token format that we have been working 
with already and can provide a patch that represents it as a contribution to 
test out the pluggable tokenAuth.

These patches will provide progress toward code being the central discussion 
vehicle. As a community, we can then incrementally build on that foundation in 
order to collaboratively deliver the common vision.

In the absence of any other home for posting such patches, let's assume that 
they will be attached to HADOOP-9392 - or a dedicated subtask for this 
particular aspect/s - I will leave that detail to Kai.

@Alejandro, being the only voice on this thread that isn't represented in the 
votes above, please feel free to agree or disagree with this direction.

thanks,

--larry

On Jul 5, 2013, at 3:24 PM, Larry McCay  wrote:

> Hi Andy -
> 
>> Happy Fourth of July to you and yours.
> 
> Same to you and yours. :-)
> We had some fun in the sun for a change - we've had nothing but rain on the 
> east coast lately.
> 
>> My concern here is there may have been a misinterpretation or lack of
>> consensus on what is meant by "clean slate"
> 
> 
> Apparently so.
> On the pre-summit call, I stated that I was interested in reconciling the 
> jiras so that we had one to work from.
> 
> You recommended that we set them aside for the time being - with the 
> understanding that work would continue on your side (and our's as well) - and 
> approach the community discussion from a clean slate.
> We seemed to do this at the summit session quite well.
> It was my understanding that this community discussion would live beyond the 
> summit and continue on this list.
> 
> While closing the summit session we agreed to follow up on common-dev with 
> first a summary then a discussion of the moving parts.
> 
> I never expected the previous work to be abandoned and fully expected it to 
> inform the discussion that happened here.
> 
> If you would like to reframe what clean slate was supposed to mean or 
> describe what it means now - that would be welcome - before I waste any more 
> time trying to facilitate a community discussion that is apparently not 
> wanted.
> 
>> Nowhere in this
>> picture are self appointed "master JIRAs" and such, which have been
>> disappointing to see crop up, we should be collaboratively coding not
>> planting flags.
> 
> I don't know what you mean by self-appointed master JIRAs.
> It has certainly not been anyone's intention to disappoint.
> Any mention of a new JIRA was just to have a clear context to gather the 
> agreed upon points - previous and/or existing JIRAs would easily be linked.
> 
> Planting flags… I need to go back and read my discussion point about the JIRA 
> and see how it gave that impression.
> That is not how I define success. The only flags that count are code. What we 
> are lacking is the roadmap on which to put the code.
> 
>> I read Kai's latest document as something approaching today's consensus (or
>> at least a common point of view?) rather than a historical document.
>> Perhaps he and it can be given equal share of the consideration.
> 
> I definitely read it as something that has evolved into something approaching 
> what we have been talking about so far. There has not however been enough 
> discussion anywhere near the level of detail in that document and more 
> details are needed for each component in the design. 
> Why the work in that document should not be fed into the community discussion 
> as anyone else's would be - I fail to understand.
> 
> My suggestion continues to be that you should take that document and speak to 
> the inventory of moving parts as we agreed.
> As these are agreed upon, we will ensure that the appropriate subtasks are 
> filed against whatever JIRA is to host them - don't really care much wh

[ANNOUNCE] Apache Hadoop 0.23.9 released

2013-07-10 Thread Thomas Graves
All,

Apache Hadoop 0.23.9 is now available.

This is a bug-fix release.

Please see release notes for details.

Thanks,
Tom Graves


Re: [DISCUSS] Hadoop SSO/Token Server Components

2013-07-10 Thread Alejandro Abdelnur
Larry, all,

It is still not clear to me what end state we are aiming for, or whether we
even agree on that.

IMO, instead of trying to agree on what to do, we should first agree on the
final state, then see what needs to change to get there, and then how to make
those changes.

The different documents out there focus more on the how.

We should not try to say how before we know what.

Thx.




On Wed, Jul 10, 2013 at 6:42 AM, Larry McCay  wrote:

> All -
>
> After combing through this thread - as well as the summit session summary
> thread, I think that we have the following two items that we can probably
> move forward with:
>
> 1. TokenAuth method - assuming this means the pluggable authentication
> mechanisms within the RPC layer (2 votes: Kai and Kyle)
> 2. An actual Hadoop Token format (2 votes: Brian and myself)
>
> I propose that we attack both of these aspects as one. Let's provide the
> structure and interfaces of the pluggable framework for use in the RPC
> layer through leveraging Daryn's pluggability work and POC it with a
> particular token format (not necessarily the only format ever supported -
> we just need one to start). If there has already been work done in this
> area by anyone then please speak up and commit to providing a patch - so
> that we don't duplicate effort.
>
> @Daryn - is there a particular Jira or set of Jiras that we can look at to
> discern the pluggability mechanism details? Documentation of it would be
> great as well.
> @Kai - do you have existing code for the pluggable token authentication
> mechanism - if not, we can take a stab at representing it with interfaces
> and/or POC code.
> I can stand up and say that we have a token format that we have been
> working with already and can provide a patch that represents it as a
> contribution to test out the pluggable tokenAuth.
>
> These patches will provide progress toward code being the central
> discussion vehicle. As a community, we can then incrementally build on that
> foundation in order to collaboratively deliver the common vision.
>
> In the absence of any other home for posting such patches, let's assume
> that they will be attached to HADOOP-9392 - or a dedicated subtask for this
> particular aspect/s - I will leave that detail to Kai.
>
> @Alejandro, being the only voice on this thread that isn't represented in
> the votes above, please feel free to agree or disagree with this direction.
>
> thanks,
>
> --larry
>
> On Jul 5, 2013, at 3:24 PM, Larry McCay  wrote:
>
> > Hi Andy -
> >
> >> Happy Fourth of July to you and yours.
> >
> > Same to you and yours. :-)
> > We had some fun in the sun for a change - we've had nothing but rain on
> the east coast lately.
> >
> >> My concern here is there may have been a misinterpretation or lack of
> >> consensus on what is meant by "clean slate"
> >
> >
> > Apparently so.
> > On the pre-summit call, I stated that I was interested in reconciling
> the jiras so that we had one to work from.
> >
> > You recommended that we set them aside for the time being - with the
> understanding that work would continue on your side (and our's as well) -
> and approach the community discussion from a clean slate.
> > We seemed to do this at the summit session quite well.
> > It was my understanding that this community discussion would live beyond
> the summit and continue on this list.
> >
> > While closing the summit session we agreed to follow up on common-dev
> with first a summary then a discussion of the moving parts.
> >
> > I never expected the previous work to be abandoned and fully expected it
> to inform the discussion that happened here.
> >
> > If you would like to reframe what clean slate was supposed to mean or
> describe what it means now - that would be welcome - before I waste any more
> time trying to facilitate a community discussion that is apparently not
> wanted.
> >
> >> Nowhere in this
> >> picture are self appointed "master JIRAs" and such, which have been
> >> disappointing to see crop up, we should be collaboratively coding not
> >> planting flags.
> >
> > I don't know what you mean by self-appointed master JIRAs.
> > It has certainly not been anyone's intention to disappoint.
> > Any mention of a new JIRA was just to have a clear context to gather the
> agreed upon points - previous and/or existing JIRAs would easily be linked.
> >
> > Planting flags… I need to go back and read my discussion point about the
> JIRA and see how it gave that impression.
> > That is not how I define success. The only flags that count are code.
> What we are lacking is the roadmap on which to put the code.
> >
> >> I read Kai's latest document as something approaching today's consensus
> (or
> >> at least a common point of view?) rather than a historical document.
> >> Perhaps he and it can be given equal share of the consideration.
> >
> > I definitely read it as something that has evolved into something
> approaching what we have been talking about so far. There has

Re: [DISCUSS] Hadoop SSO/Token Server Components

2013-07-10 Thread Daryn Sharp
Sorry for falling out of the loop.  I'm catching up on the jiras and discussion, 
and will comment this afternoon.

Daryn

On Jul 10, 2013, at 8:42 AM, Larry McCay wrote:

> All -
> 
> After combing through this thread - as well as the summit session summary 
> thread, I think that we have the following two items that we can probably 
> move forward with:
> 
> 1. TokenAuth method - assuming this means the pluggable authentication 
> mechanisms within the RPC layer (2 votes: Kai and Kyle)
> 2. An actual Hadoop Token format (2 votes: Brian and myself)
> 
> I propose that we attack both of these aspects as one. Let's provide the 
> structure and interfaces of the pluggable framework for use in the RPC layer 
> through leveraging Daryn's pluggability work and POC it with a particular 
> token format (not necessarily the only format ever supported - we just need 
> one to start). If there has already been work done in this area by anyone 
> then please speak up and commit to providing a patch - so that we don't 
> duplicate effort. 
> 
> @Daryn - is there a particular Jira or set of Jiras that we can look at to 
> discern the pluggability mechanism details? Documentation of it would be 
> great as well.
> @Kai - do you have existing code for the pluggable token authentication 
> mechanism - if not, we can take a stab at representing it with interfaces 
> and/or POC code.
> I can stand up and say that we have a token format that we have been working 
> with already and can provide a patch that represents it as a contribution to 
> test out the pluggable tokenAuth.
> 
> These patches will provide progress toward code being the central discussion 
> vehicle. As a community, we can then incrementally build on that foundation 
> in order to collaboratively deliver the common vision.
> 
> In the absence of any other home for posting such patches, let's assume that 
> they will be attached to HADOOP-9392 - or a dedicated subtask for this 
> particular aspect/s - I will leave that detail to Kai.
> 
> @Alejandro, being the only voice on this thread that isn't represented in the 
> votes above, please feel free to agree or disagree with this direction.
> 
> thanks,
> 
> --larry
> 
> On Jul 5, 2013, at 3:24 PM, Larry McCay  wrote:
> 
>> Hi Andy -
>> 
>>> Happy Fourth of July to you and yours.
>> 
>> Same to you and yours. :-)
>> We had some fun in the sun for a change - we've had nothing but rain on the 
>> east coast lately.
>> 
>>> My concern here is there may have been a misinterpretation or lack of
>>> consensus on what is meant by "clean slate"
>> 
>> 
>> Apparently so.
>> On the pre-summit call, I stated that I was interested in reconciling the 
>> jiras so that we had one to work from.
>> 
>> You recommended that we set them aside for the time being - with the 
>> understanding that work would continue on your side (and our's as well) - 
>> and approach the community discussion from a clean slate.
>> We seemed to do this at the summit session quite well.
>> It was my understanding that this community discussion would live beyond the 
>> summit and continue on this list.
>> 
>> While closing the summit session we agreed to follow up on common-dev with 
>> first a summary then a discussion of the moving parts.
>> 
>> I never expected the previous work to be abandoned and fully expected it to 
>> inform the discussion that happened here.
>> 
>> If you would like to reframe what clean slate was supposed to mean or 
>> describe what it means now - that would be welcome - before I waste any more 
>> time trying to facilitate a community discussion that is apparently not 
>> wanted.
>> 
>>> Nowhere in this
>>> picture are self appointed "master JIRAs" and such, which have been
>>> disappointing to see crop up, we should be collaboratively coding not
>>> planting flags.
>> 
>> I don't know what you mean by self-appointed master JIRAs.
>> It has certainly not been anyone's intention to disappoint.
>> Any mention of a new JIRA was just to have a clear context to gather the 
>> agreed upon points - previous and/or existing JIRAs would easily be linked.
>> 
>> Planting flags… I need to go back and read my discussion point about the 
>> JIRA and see how it gave that impression.
>> That is not how I define success. The only flags that count are code. What we 
>> are lacking is the roadmap on which to put the code.
>> 
>>> I read Kai's latest document as something approaching today's consensus (or
>>> at least a common point of view?) rather than a historical document.
>>> Perhaps he and it can be given equal share of the consideration.
>> 
>> I definitely read it as something that has evolved into something 
>> approaching what we have been talking about so far. There has not however 
>> been enough discussion anywhere near the level of detail in that document 
>> and more details are needed for each component in the design. 
>> Why the work in that document should not be fed into the community 
>> discussion as anyone el

RE: [DISCUSS] Hadoop SSO/Token Server Components

2013-07-10 Thread Brian Swan
Hi Alejandro, all-

There seems to be agreement on the broad stroke description of the components 
needed to achieve pluggable token authentication (I'm sure I'll be corrected if 
that isn't the case). However, discussion of the details of those components 
doesn't seem to be moving forward. I think this is because the details are 
really best understood through code. I also see *a* (i.e. one of many possible) 
token format and pluggable authentication mechanisms within the RPC layer as 
components that can have immediate benefit to Hadoop users AND still allow 
flexibility in the larger design. So, I think the best way to move the 
conversation of "what we are aiming for" forward is to start looking at code 
for these components. I am especially interested in moving forward with 
pluggable authentication mechanisms within the RPC layer and would love to see 
what others have done in this area (if anything).
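To ground the "*a* (i.e. one of many possible) token format" idea, here is a deliberately minimal sketch - an HMAC-signed identifier with an expiry - with entirely hypothetical names; it is not a proposal for the actual format, just an illustration of the kind of small, self-contained component that code discussion could start from.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

/** Deliberately minimal token format sketch: identifier + HMAC-SHA256. */
public class SimpleTokenSketch {
    static byte[] sign(byte[] identifier, byte[] secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        return mac.doFinal(identifier);
    }

    /** Serialize as base64(identifier) "." base64(signature). */
    static String mint(String user, long expiryMillis, byte[] secret) throws Exception {
        byte[] id = (user + "|" + expiryMillis).getBytes(StandardCharsets.UTF_8);
        Base64.Encoder enc = Base64.getEncoder();
        return enc.encodeToString(id) + "." + enc.encodeToString(sign(id, secret));
    }

    /** Verify signature and expiry; returns the user name, or null on failure. */
    static String verify(String token, long nowMillis, byte[] secret) throws Exception {
        String[] parts = token.split("\\.");
        byte[] id = Base64.getDecoder().decode(parts[0]);
        byte[] sig = Base64.getDecoder().decode(parts[1]);
        // Constant-time comparison to avoid timing side channels.
        if (!java.security.MessageDigest.isEqual(sig, sign(id, secret))) return null;
        String[] fields = new String(id, StandardCharsets.UTF_8).split("\\|");
        return Long.parseLong(fields[1]) > nowMillis ? fields[0] : null;
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "shared-secret".getBytes(StandardCharsets.UTF_8);
        String t = mint("alice", System.currentTimeMillis() + 60_000, secret);
        System.out.println(verify(t, System.currentTimeMillis(), secret)); // alice
    }
}
```

A real format would carry more fields (Hadoop's existing delegation tokens, for instance, include kind and service alongside the identifier), but even a toy like this is enough to exercise a pluggable verification path.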

Thanks.

-Brian

-Original Message-
From: Alejandro Abdelnur [mailto:t...@cloudera.com] 
Sent: Wednesday, July 10, 2013 8:15 AM
To: Larry McCay
Cc: common-dev@hadoop.apache.org; da...@yahoo-inc.com; Kai Zheng
Subject: Re: [DISCUSS] Hadoop SSO/Token Server Components

Larry, all,

It is still not clear to me what end state we are aiming for, or whether we 
even agree on that.

IMO, instead of trying to agree on what to do, we should first agree on the 
final state, then see what needs to change to get there, and then how to make 
those changes.

The different documents out there focus more on the how.

We should not try to say how before we know what.

Thx.




On Wed, Jul 10, 2013 at 6:42 AM, Larry McCay  wrote:

> All -
>
> After combing through this thread - as well as the summit session 
> summary thread, I think that we have the following two items that we 
> can probably move forward with:
>
> 1. TokenAuth method - assuming this means the pluggable authentication 
> mechanisms within the RPC layer (2 votes: Kai and Kyle) 2. An actual 
> Hadoop Token format (2 votes: Brian and myself)
>
> I propose that we attack both of these aspects as one. Let's provide 
> the structure and interfaces of the pluggable framework for use in the 
> RPC layer through leveraging Daryn's pluggability work and POC it with 
> a particular token format (not necessarily the only format ever 
> supported - we just need one to start). If there has already been work 
> done in this area by anyone then please speak up and commit to 
> providing a patch - so that we don't duplicate effort.
>
> @Daryn - is there a particular Jira or set of Jiras that we can look 
> at to discern the pluggability mechanism details? Documentation of it 
> would be great as well.
> @Kai - do you have existing code for the pluggable token 
> authentication mechanism - if not, we can take a stab at representing 
> it with interfaces and/or POC code.
>> I can stand up and say that we have a token format that we have been 
> working with already and can provide a patch that represents it as a 
> contribution to test out the pluggable tokenAuth.
>
> These patches will provide progress toward code being the central 
> discussion vehicle. As a community, we can then incrementally build on 
> that foundation in order to collaboratively deliver the common vision.
>
> In the absence of any other home for posting such patches, let's 
> assume that they will be attached to HADOOP-9392 - or a dedicated 
> subtask for this particular aspect/s - I will leave that detail to Kai.
>
> @Alejandro, being the only voice on this thread that isn't represented 
> in the votes above, please feel free to agree or disagree with this direction.
>
> thanks,
>
> --larry
>
> On Jul 5, 2013, at 3:24 PM, Larry McCay  wrote:
>
> > Hi Andy -
> >
> >> Happy Fourth of July to you and yours.
> >
> > Same to you and yours. :-)
> > We had some fun in the sun for a change - we've had nothing but rain 
> > on
> the east coast lately.
> >
> >> My concern here is there may have been a misinterpretation or lack 
> >> of consensus on what is meant by "clean slate"
> >
> >
> > Apparently so.
> > On the pre-summit call, I stated that I was interested in 
> > reconciling
> the jiras so that we had one to work from.
> >
> > You recommended that we set them aside for the time being - with the
> understanding that work would continue on your side (and our's as 
> well) - and approach the community discussion from a clean slate.
> > We seemed to do this at the summit session quite well.
> > It was my understanding that this community discussion would live 
> > beyond
> the summit and continue on this list.
> >
> > While closing the summit session we agreed to follow up on 
> > common-dev
> with first a summary then a discussion of the moving parts.
> >
> > I never expected the previous work to be abandoned and fully 
> > expected it
> to inform the discussion that happened here.
> >
> > If you would like to reframe what clean slate was supposed to mean 
> > or
> describe what it means now 

Re: [DISCUSS] Hadoop SSO/Token Server Components

2013-07-10 Thread Larry McCay
It seems to me that we can have the best of both worlds here…it's all about the 
scoping.

If we were to reframe the immediate scope to the lowest common denominator of 
what is needed for accepting tokens in authentication plugins, then we gain:

1. a very manageable scope to define and agree upon
2. a deliverable that should be useful in and of itself
3. a foundation for community collaboration - one we can build on for 
higher-level solutions on top of this lowest common denominator, and for 
gaining experience as a working community

So, to Alejandro's point, perhaps we need to define what would make #2 above 
true - this could serve as the "what" we are building instead of the "how" to 
build it.
Including:
a. the project structure, within hadoop-common-project/common-security or the like
b. the use cases that would need to be enabled to make it a self-contained and 
useful contribution - without higher-level solutions
c. the JIRA/s for contributing patches
d. what specific patches will be needed to accomplish the use cases in #b

In other words, an end-state for the lowest common denominator that enables 
code patches in the near-term is the best of both worlds.
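As a sketch of what that lowest-common-denominator deliverable might look like - interfaces only, with hypothetical names that are not Daryn's actual pluggability work nor any existing Hadoop API - the RPC layer would consult a registry of token authenticators negotiated by mechanism name:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical shape of a pluggable token-authentication contract. */
interface TokenAuthenticator {
    String mechanismName();                // e.g. "simple-token"
    String authenticate(byte[] rawToken);  // returns the principal, or null
}

/** Registry the RPC layer would consult when negotiating authentication. */
class AuthenticatorRegistry {
    private final Map<String, TokenAuthenticator> plugins = new HashMap<>();

    void register(TokenAuthenticator a) {
        plugins.put(a.mechanismName(), a);
    }

    String authenticate(String mechanism, byte[] rawToken) {
        TokenAuthenticator a = plugins.get(mechanism);
        if (a == null) throw new IllegalArgumentException("no plugin: " + mechanism);
        return a.authenticate(rawToken);
    }
}

public class PluggableAuthSketch {
    public static void main(String[] args) {
        AuthenticatorRegistry registry = new AuthenticatorRegistry();
        // One concrete token format; others could be registered alongside it.
        registry.register(new TokenAuthenticator() {
            public String mechanismName() { return "simple-token"; }
            public String authenticate(byte[] t) {
                String s = new String(t);
                return s.startsWith("user:") ? s.substring(5) : null;
            }
        });
        System.out.println(registry.authenticate("simple-token", "user:alice".getBytes()));
        // prints: alice
    }
}
```

Agreeing on a contract of roughly this size seems like the kind of scoped, self-contained deliverable described in #1-#3 above: useful on its own, and a seam for later token formats to plug into.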

I think this may be a good way to bootstrap the collaboration process for our 
emerging security community rather than trying to tackle a huge vision all at 
once.

@Alejandro - if you have something else in mind that would bootstrap this 
process - that would be great - please advise.

thoughts?

On Jul 10, 2013, at 1:06 PM, Brian Swan  wrote:

> Hi Alejandro, all-
> 
> There seems to be agreement on the broad stroke description of the components 
> needed to achieve pluggable token authentication (I'm sure I'll be corrected 
> if that isn't the case). However, discussion of the details of those 
> components doesn't seem to be moving forward. I think this is because the 
> details are really best understood through code. I also see *a* (i.e. one of 
> many possible) token format and pluggable authentication mechanisms within 
> the RPC layer as components that can have immediate benefit to Hadoop users 
> AND still allow flexibility in the larger design. So, I think the best way to 
> move the conversation of "what we are aiming for" forward is to start looking 
> at code for these components. I am especially interested in moving forward 
> with pluggable authentication mechanisms within the RPC layer and would love 
> to see what others have done in this area (if anything).
> 
> Thanks.
> 
> -Brian
> 
> -Original Message-
> From: Alejandro Abdelnur [mailto:t...@cloudera.com] 
> Sent: Wednesday, July 10, 2013 8:15 AM
> To: Larry McCay
> Cc: common-dev@hadoop.apache.org; da...@yahoo-inc.com; Kai Zheng
> Subject: Re: [DISCUSS] Hadoop SSO/Token Server Components
> 
> Larry, all,
> 
> It is still not clear to me what end state we are aiming for, or whether we 
> even agree on that.
> 
> IMO, instead of trying to agree on what to do, we should first agree on the 
> final state, then see what needs to change to get there, and then how to make 
> those changes.
> 
> The different documents out there focus more on the how.
> 
> We should not try to say how before we know what.
> 
> Thx.
> 
> 
> 
> 
> On Wed, Jul 10, 2013 at 6:42 AM, Larry McCay  wrote:
> 
>> All -
>> 
>> After combing through this thread - as well as the summit session 
>> summary thread, I think that we have the following two items that we 
>> can probably move forward with:
>> 
>> 1. TokenAuth method - assuming this means the pluggable authentication 
>> mechanisms within the RPC layer (2 votes: Kai and Kyle) 2. An actual 
>> Hadoop Token format (2 votes: Brian and myself)
>> 
>> I propose that we attack both of these aspects as one. Let's provide 
>> the structure and interfaces of the pluggable framework for use in the 
>> RPC layer through leveraging Daryn's pluggability work and POC it with 
>> a particular token format (not necessarily the only format ever 
>> supported - we just need one to start). If there has already been work 
>> done in this area by anyone then please speak up and commit to 
>> providing a patch - so that we don't duplicate effort.
>> 
>> @Daryn - is there a particular Jira or set of Jiras that we can look 
>> at to discern the pluggability mechanism details? Documentation of it 
>> would be great as well.
>> @Kai - do you have existing code for the pluggable token 
>> authentication mechanism - if not, we can take a stab at representing 
>> it with interfaces and/or POC code.
>> I can standup and say that we have a token format that we have been 
>> working with already and can provide a patch that represents it as a 
>> contribution to test out the pluggable tokenAuth.
>> 
>> These patches will provide progress toward code being the central 
>> discussion vehicle. As a community, we can then incrementally build on 
>> that foundation in order to collaboratively deliver the common vision.
>> 
>> In the absence of any other home for posting such patches, let's 
>> assu

Re: protobuf 2.5.0 failure should be known by wiki

2013-07-10 Thread Akira AJISAKA

Thank you for your comments.

> create a wiki account then ask for write access & we'll set you up

I created a wiki account. (https://wiki.apache.org/hadoop/AkiraAjisaka)
Would you set me up with write access?


Now I know HADOOP-9346 and HADOOP-9440 for enabling protobuf 2.5.0, but
these issues have been left for months.



Have you seen the reason why these issues have not been fixed; see -
https://issues.apache.org/jira/browse/HADOOP-9346?focusedCommentId=13657835&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13657835

If you have a solution for that issue, I am glad to commit that patch.



I've seen it, but I don't have a solution at the moment.
I'll try to find one.

Regards,
Akira


[jira] [Created] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9718:
---

 Summary: Branch-1-win TestGroupFallback#testGroupWithFallback() 
failed caused by java.lang.UnsatisfiedLinkError
 Key: HADOOP-9718
 URL: https://issues.apache.org/jira/browse/HADOOP-9718
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win


Here is the error information:
java.lang.UnsatisfiedLinkError: 
org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
	at org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native Method)
	at org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
	at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
	at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
	at org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
This is related to https://issues.apache.org/jira/browse/HADOOP-9232.
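The pattern the failing test exercises - choose a working implementation when the native library is unavailable, instead of throwing UnsatisfiedLinkError at call time - can be sketched as follows. This is a simplified, hypothetical stand-in, not the real JniBasedUnixGroupsMappingWithFallback classes:

```java
/** Simplified sketch of the native-with-fallback pattern behind the failing test. */
public class GroupMappingFallbackSketch {
    interface GroupsMapping {
        String[] getGroups(String user);
    }

    /** Stand-in for the JNI-backed mapping; throws when the native method is unbound. */
    static class NativeMapping implements GroupsMapping {
        public String[] getGroups(String user) {
            throw new UnsatisfiedLinkError("native getGroupForUser not bound");
        }
    }

    /** Pure-Java stand-in used when native code is unavailable. */
    static class ShellMapping implements GroupsMapping {
        public String[] getGroups(String user) {
            return new String[] { user + "-group" };  // placeholder result
        }
    }

    /** Picks the implementation up front, so callers never hit the link error. */
    static GroupsMapping withFallback(boolean nativeLoaded) {
        return nativeLoaded ? new NativeMapping() : new ShellMapping();
    }

    public static void main(String[] args) {
        GroupsMapping m = withFallback(false);  // pretend the native lib failed to load
        System.out.println(m.getGroups("xi")[0]);  // xi-group
    }
}
```

The bug report suggests the fallback decision is being made as if the native library loaded when it did not (or the Windows build does not bind the method), so the JNI path is reached and the link error escapes.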



Re: creating 2.2.0 version in JIRA

2013-07-10 Thread Alejandro Abdelnur
If that is the case, then I'd like to push to branch-2.1 the following JIRAs,
which were committed to branch-2 when the first RC was just out and we didn't
know that many more things would come in.

I'm planning to push these JIRAs mid-afternoon PST today. If there are any
that should not make it, please speak up.

Thanks.


common:

HADOOP-9661. Allow metrics sources to be extended. (sandyr via tucu)

HADOOP-9370. Write FSWrapper class to wrap FileSystem and FileContext for
better test coverage. (Andrew Wang via Colin Patrick McCabe)

HADOOP-9355. Abstract symlink tests to use either FileContext or
FileSystem. (Andrew Wang via Colin Patrick McCabe)

HADOOP-9673. NetworkTopology: when a node can't be added, print out its
location for diagnostic purposes. (Colin Patrick McCabe)

HADOOP-9414. Refactor out FSLinkResolver and relevant helper methods.
(Andrew Wang via Colin Patrick McCabe)

HADOOP-9416. Add new symlink resolution methods in FileSystem and
FileSystemLinkResolver. (Andrew Wang via Colin Patrick McCabe)

hdfs:

HDFS-4908. Reduce snapshot inode memory usage. (szetszwo)

yarn:

YARN-866. Add test for class ResourceWeights. (ywskycn via tucu)

YARN-736. Add a multi-resource fair sharing metric. (sandyr via tucu)

YARN-883. Expose Fair Scheduler-specific queue metrics. (sandyr via tucu)

mapreduce:

MAPREDUCE-5333. Add test that verifies MRAM works correctly when sending
requests with non-normalized capabilities. (ywskycn via tucu)





On Tue, Jul 9, 2013 at 10:54 AM, Arun C Murthy  wrote:

>
> On Jul 2, 2013, at 3:54 PM, Alejandro Abdelnur  wrote:
>
> > We need clarification on this then.
> >
> > I was under the impression that branch-2 would be 2.2.0.
>
> Sorry, I missed this thread - thanks to Jason for pointing me.
>
> As we discussed, the idea was that we are not adding new features to the
> the beta release (2.1.x-beta) so that we can focus on stabilizing it and
> releasing as hadoop-2.2.0 i.e. GA of hadoop-2. See http://s.apache.org/lZ8
> .
>
> Hence, by default, new features go to branch-2 with fix-version as 2.3.x.
>
> Hope that makes sense. I'll fix branch-2 to set version to 2.3.0-SNAPSHOT
> to avoid further confusion.
>
> thanks,
> Arun
>
> >
> > thx
> >
> > On Tue, Jul 2, 2013 at 2:38 PM, Jason Lowe  wrote:
> >
> >> I thought Arun intends for 2.2.0 to be created off of branch-2.1.0-beta
> >> and not off of branch-2.  As I understand it, only critical blockers
> will
> >> be the delta between 2.1.0-beta and 2.2.0 and items checked into
> branch-2
> >> should be marked as  fixed in 2.3.0.
> >>
> >> Part of the confusion is that currently branch-2 builds as
> 2.2.0-SNAPSHOT,
> >> but I believe Arun intended it to be 2.3.0-SNAPSHOT.
> >>
> >> Jason
> >>
> >>
> >> On 06/21/2013 12:05 PM, Alejandro Abdelnur wrote:
> >>
> >>> Thanks Suresh, didn't know that, will do.
> >>>
> >>>
> >>> On Fri, Jun 21, 2013 at 9:48 AM, Suresh Srinivas <
> sur...@hortonworks.com
>  wrote:
> >>>
> >>> I have added in to HDFS, HADOOP, MAPREDUCE projects. Can someone add it
>  for
>  YARN?
> 
> 
>  On Fri, Jun 21, 2013 at 9:35 AM, Alejandro Abdelnur <
> t...@cloudera.com
> 
> > wrote:
> > When Arun created branch-2.1-beta he stated:
> >
> > The expectation is that 2.2.0 will be limited to content in
> >>
> > branch-2.1-beta
> >
> >> and we stick to stabilizing it henceforth (I've deliberately not
> >>
> > created
> 
> > 2.2.0
> >
> >> fix-version on jira yet).
> >>
> > I'm working on/committing some JIRAs that I'm putting in branch-2
> (testcases
> >
>  and
> 
> > improvements) but I don't want to put them in branch-2.1-beta as they
> > are
> > not critical and I don't want to add unnecessary noise to the
> >
>  branch-2.1-beta
> 
> > release work.
> >
> > Currently branch-2 POMs have a version 2.2.0 and the CHANGES.txt
> files
> > as
> > well.
> >
> > But because we did not create a JIRA version I cannot close those
> JIRAs.
> >
> > Can we please create the JIRA versions? later we can rename them.
> >
> > Thx
> >
> >
> > --
> > Alejandro
> >
> >
> 
>  --
>  http://hortonworks.com/**download/ 
> 
> 
> >>>
> >>>
> >>
> >
> >
> > --
> > Alejandro
>
> --
> Arun C. Murthy
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>


-- 
Alejandro


RE: [DISCUSS] Hadoop SSO/Token Server Components

2013-07-10 Thread Brian Swan
Thanks, Larry. That is what I was trying to say, but you've said it better and 
in more detail. :-) To extract from what you are saying: "If we were to reframe 
the immediate scope to the lowest common denominator of what is needed for 
accepting tokens in authentication plugins then we gain... an end-state for the 
lowest common denominator that enables code patches in the near-term is the 
best of both worlds."

-Brian

-Original Message-
From: Larry McCay [mailto:lmc...@hortonworks.com] 
Sent: Wednesday, July 10, 2013 10:40 AM
To: common-dev@hadoop.apache.org
Cc: da...@yahoo-inc.com; Kai Zheng; Alejandro Abdelnur
Subject: Re: [DISCUSS] Hadoop SSO/Token Server Components

It seems to me that we can have the best of both worlds here...it's all about 
the scoping.

If we were to reframe the immediate scope to the lowest common denominator of 
what is needed for accepting tokens in authentication plugins then we gain:

1. a very manageable scope to define and agree upon
2. a deliverable that should be useful in and of itself
3. a foundation for community collaboration that we build on for higher level 
solutions built on this lowest common denominator and experience as a working 
community

So, to Alejandro's point, perhaps we need to define what would make #2 above 
true - this could serve as the "what" we are building instead of the "how" to 
build it.
Including:
a. project structure within hadoop-common-project/common-security or the like
b. the usecases that would need to be enabled to make it a self contained and 
useful contribution - without higher level solutions
c. the JIRA/s for contributing patches
d. what specific patches will be needed to accomplish the usecases in #b

In other words, an end-state for the lowest common denominator that enables 
code patches in the near-term is the best of both worlds.

I think this may be a good way to bootstrap the collaboration process for our 
emerging security community rather than trying to tackle a huge vision all at 
once.

@Alejandro - if you have something else in mind that would bootstrap this 
process - that would great - please advise.

thoughts?

On Jul 10, 2013, at 1:06 PM, Brian Swan  wrote:

> Hi Alejandro, all-
> 
> There seems to be agreement on the broad stroke description of the components 
> needed to achieve pluggable token authentication (I'm sure I'll be corrected 
> if that isn't the case). However, discussion of the details of those 
> components doesn't seem to be moving forward. I think this is because the 
> details are really best understood through code. I also see *a* (i.e. one of 
> many possible) token format and pluggable authentication mechanisms within 
> the RPC layer as components that can have immediate benefit to Hadoop users 
> AND still allow flexibility in the larger design. So, I think the best way to 
> move the conversation of "what we are aiming for" forward is to start looking 
> at code for these components. I am especially interested in moving forward 
> with pluggable authentication mechanisms within the RPC layer and would love 
> to see what others have done in this area (if anything).
> 
> Thanks.
> 
> -Brian
> 
> -Original Message-
> From: Alejandro Abdelnur [mailto:t...@cloudera.com]
> Sent: Wednesday, July 10, 2013 8:15 AM
> To: Larry McCay
> Cc: common-dev@hadoop.apache.org; da...@yahoo-inc.com; Kai Zheng
> Subject: Re: [DISCUSS] Hadoop SSO/Token Server Components
> 
> Larry, all,
> 
> Still is not clear to me what is the end state we are aiming for, or that we 
> even agree on that.
> 
> IMO, instead of trying to agree on what to do, we should first agree on the 
> final state, then see what needs to change to get there, and then see how we 
> make those changes.
> 
> The different documents out there focus more on how.
> 
> We should not try to say how before we know what.
> 
> Thx.
> 
> 
> 
> 
> On Wed, Jul 10, 2013 at 6:42 AM, Larry McCay  wrote:
> 
>> All -
>> 
>> After combing through this thread - as well as the summit session 
>> summary thread, I think that we have the following two items that we 
>> can probably move forward with:
>> 
>> 1. TokenAuth method - assuming this means the pluggable 
>> authentication mechanisms within the RPC layer (2 votes: Kai and 
>> Kyle) 2. An actual Hadoop Token format (2 votes: Brian and myself)
>> 
>> I propose that we attack both of these aspects as one. Let's provide 
>> the structure and interfaces of the pluggable framework for use in 
>> the RPC layer through leveraging Daryn's pluggability work and POC it 
>> with a particular token format (not necessarily the only format ever 
>> supported - we just need one to start). If there has already been 
>> work done in this area by anyone then please speak up and commit to 
>> providing a patch - so that we don't duplicate effort.
>> 
>> @Daryn - is there a particular Jira or set of Jiras that we can look 
>> at to discern the pluggability mechanism details? Documentation of it 
>> wou

[jira] [Resolved] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-9718.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

+1 for the patch.  I tested it successfully on Mac and Windows.  I committed 
this to branch-1-win.  Thank you for the contribution, Xi.

> Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by 
> java.lang.UnsatisfiedLinkError
> --
>
> Key: HADOOP-9718
> URL: https://issues.apache.org/jira/browse/HADOOP-9718
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1-win
> Environment: Windows
>Reporter: Xi Fang
>Assignee: Xi Fang
> Fix For: 1-win
>
> Attachments: HADOOP-9718.patch
>
>
> Here is the error information:
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> java.lang.UnsatisfiedLinkError: 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
> Method)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
> at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
> at 
> org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
> This is related to https://issues.apache.org/jira/browse/HADOOP-9232.
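The fallback path in the stack trace above can be sketched as follows. This is a minimal illustration of the pattern behind JniBasedUnixGroupsMappingWithFallback, not the actual Hadoop code: the class bodies, the simulated link error, and the stub fallback behavior are all assumptions for the sake of the example.

```java
import java.util.Arrays;
import java.util.List;

interface GroupMapping {
    List<String> getGroups(String user);
}

class NativeGroupMapping implements GroupMapping {
    public List<String> getGroups(String user) {
        // Calling a native method without the native library loaded throws
        // UnsatisfiedLinkError at call time; we simulate that here.
        throw new UnsatisfiedLinkError("native hadoop library not loaded");
    }
}

class ShellGroupMapping implements GroupMapping {
    public List<String> getGroups(String user) {
        // A real fallback would shell out (e.g. `id -Gn <user>`); this stub
        // just returns a single group named after the user.
        return Arrays.asList(user);
    }
}

public class GroupMappingWithFallback implements GroupMapping {
    private GroupMapping impl = new NativeGroupMapping();

    public List<String> getGroups(String user) {
        try {
            return impl.getGroups(user);
        } catch (UnsatisfiedLinkError e) {
            // Native code unavailable: switch to the pure-Java fallback.
            impl = new ShellGroupMapping();
            return impl.getGroups(user);
        }
    }

    public static void main(String[] args) {
        System.out.println(new GroupMappingWithFallback().getGroups("alice"));
    }
}
```

In Hadoop the decision may instead be made once up front, based on whether the native library loaded; the try/catch here is just the simplest way to show the degradation.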



Re: creating 2.2.0 version in JIRA

2013-07-10 Thread Arun C Murthy
Sounds good. I'll re-create branch-2.1.0-beta from branch-2.1-beta when the 
last 2 blockers are in.

thanks,
Arun

On Jul 10, 2013, at 10:56 AM, Alejandro Abdelnur  wrote:

> If that is the case, then I'd like to push the following JIRAs that have
> been committed to branch-2 to branch-2.1 when the first RC was just out and
> we didn't know that many more things would come in.
> 
> I'm planning to push these JIRAs mid afternoon PST today. If there is any
> that should not make it, please speak up.
> 
> Thanks.
> 
> 
> common:
> 
>HADOOP-9661. Allow metrics sources to be extended. (sandyr via tucu)
> 
>HADOOP-9370.  Write FSWrapper class to wrap FileSystem and FileContext
> for
>better test coverage.  (Andrew Wang via Colin Patrick McCabe)
> 
>HADOOP-9355.  Abstract symlink tests to use either FileContext or
>FileSystem.  (Andrew Wang via Colin Patrick McCabe)
> 
>HADOOP-9673.  NetworkTopology: when a node can't be added, print out its
>location for diagnostic purposes.  (Colin Patrick McCabe)
> 
>HADOOP-9414.  Refactor out FSLinkResolver and relevant helper methods.
>(Andrew Wang via Colin Patrick McCabe)
> 
>HADOOP-9416.  Add new symlink resolution methods in FileSystem and
>FileSystemLinkResolver.  (Andrew Wang via Colin Patrick McCabe)
> 
> 
> hdfs:
> 
>HDFS-4908. Reduce snapshot inode memory usage.  (szetszwo)
> 
> yarn:
> 
>YARN-866. Add test for class ResourceWeights. (ywskycn via tucu)
> 
>YARN-736. Add a multi-resource fair sharing metric. (sandyr via tucu)
> 
>YARN-883. Expose Fair Scheduler-specific queue metrics. (sandyr via
> tucu)
> 
> mapreduce:
> 
>MAPREDUCE-5333. Add test that verifies MRAM works correctly when sending
>requests with non-normalized capabilities. (ywskycn via tucu)
> 
> 
> 
> 
> 
> On Tue, Jul 9, 2013 at 10:54 AM, Arun C Murthy  wrote:
> 
>> 
>> On Jul 2, 2013, at 3:54 PM, Alejandro Abdelnur  wrote:
>> 
>>> We need clarification on this then.
>>> 
>>> I was under the impression that branch-2 would be 2.2.0.
>> 
>> Sorry, I missed this thread - thanks to Jason for pointing me.
>> 
>> As we discussed, the idea was that we are not adding new features to the
>> the beta release (2.1.x-beta) so that we can focus on stabilizing it and
>> releasing as hadoop-2.2.0 i.e. GA of hadoop-2. See http://s.apache.org/lZ8
>> .
>> 
>> Hence, by default, new features go to branch-2 with fix-version as 2.3.x.
>> 
>> Hope that makes sense. I'll fix branch-2 to set version to 2.3.0-SNAPSHOT
>> to ease further confusion.
>> 
>> thanks,
>> Arun
>> 
>>> 
>>> thx
>>> 
>>> On Tue, Jul 2, 2013 at 2:38 PM, Jason Lowe  wrote:
>>> 
 I thought Arun intends for 2.2.0 to be created off of branch-2.1.0-beta
 and not off of branch-2.  As I understand it, only critical blockers
>> will
 be the delta between 2.1.0-beta and 2.2.0 and items checked into
>> branch-2
 should be marked as  fixed in 2.3.0.
 
 Part of the confusion is that currently branch-2 builds as
>> 2.2.0-SNAPSHOT,
 but I believe Arun intended it to be 2.3.0-SNAPSHOT.
 
 Jason
 
 
 On 06/21/2013 12:05 PM, Alejandro Abdelnur wrote:
 
> Thanks Suresh, didn't know that, will do.
> 
> 
> On Fri, Jun 21, 2013 at 9:48 AM, Suresh Srinivas <
>> sur...@hortonworks.com
>> wrote:
> 
> I have added in to HDFS, HADOOP, MAPREDUCE projects. Can someone add it
>> for
>> YARN?
>> 
>> 
>> On Fri, Jun 21, 2013 at 9:35 AM, Alejandro Abdelnur <
>> t...@cloudera.com
>> 
>>> wrote:
>>> When Arun created branch-2.1-beta he stated:
>>> 
>>> The expectation is that 2.2.0 will be limited to content in
 
>>> branch-2.1-beta
>>> 
 and we stick to stabilizing it henceforth (I've deliberately not
 
>>> created
>> 
>>> 2.2.0
>>> 
 fix-version on jira yet).
 
>>> I'm working on/committing some JIRAs that I'm putting in branch-2
>> (testcases
>>> 
>> and
>> 
>>> improvements) but I don't want to put them in branch-2.1-beta as they
>>> are
>>> not critical and I don't want to add unnecessary noise to the
>>> 
>> branch-2.1-beta
>> 
>>> release work.
>>> 
>>> Currently branch-2 POMs have a version 2.2.0 and the CHANGES.txt
>> files
>>> as
>>> well.
>>> 
>>> But because we did not create a JIRA version I cannot close those
>> JIRAs.
>>> 
>>> Can we please create the JIRA versions? later we can rename them.
>>> 
>>> Thx
>>> 
>>> 
>>> --
>>> Alejandro
>>> 
>>> 
>> 
>> --
>> http://hortonworks.com/**download/ 
>> 
>> 
> 
> 
 
>>> 
>>> 
>>> --
>>> Alejandro
>> 
>> --
>> Arun C. Murthy
>> Hortonworks Inc.
>> http://hortonworks.com/
>> 
>> 
>> 
> 
> 
> -- 
> Alejandro

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/




[jira] [Created] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9719:
---

 Summary: Branch-1-win TestFsShellReturnCode#testChgrp() failed 
caused by incorrect exit codes
 Key: HADOOP-9719
 URL: https://issues.apache.org/jira/browse/HADOOP-9719
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win


TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to change 
group association of files to "admin".
{code}
// Test 1: exit code for chgrp on existing file is 0
String argv[] = { "-chgrp", "admin", f1 };
verify(fs, "-chgrp", argv, 1, fsShell, 0);
{code}
.
On Windows, this is the error information:
org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
(1332): No mapping between account names and security IDs was done.
Invalid group name: admin
This test case passed previously, but it appears to have passed only because of 
another bug in FsShell#runCmdHandler 
(https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
FsShell#runCmdHandler may not return error exit codes for some exceptions (see 
private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in the 
original branch-1-win, even if "admin" is not a valid group, no error was 
reported. The fix for HADOOP-9502 makes this test fail.

This test also failed on Linux.
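The class of bug described above can be sketched in a few lines. The names and bodies are illustrative, not the actual FsShell code: a handler that swallows exceptions always reports exit code 0, so a test asserting success "passes" even when the command failed.

```java
public class ExitCodeDemo {
    interface CmdHandler {
        void run(String path) throws Exception;
    }

    // Buggy variant: the exception is swallowed, so callers always see 0.
    static int runBuggy(CmdHandler handler, String path) {
        try {
            handler.run(path);
        } catch (Exception e) {
            // swallowed: the failure never reaches the exit code
        }
        return 0;
    }

    // Fixed variant: a failure is reflected in a non-zero exit code.
    static int runFixed(CmdHandler handler, String path) {
        try {
            handler.run(path);
            return 0;
        } catch (Exception e) {
            return 1;
        }
    }

    public static void main(String[] args) {
        CmdHandler chgrp = p -> {
            throw new Exception("Invalid group name: admin");
        };
        System.out.println("buggy exit code: " + runBuggy(chgrp, "f1"));
        System.out.println("fixed exit code: " + runFixed(chgrp, "f1"));
    }
}
```

Asserting on the returned code, as TestFsShellReturnCode does, only works once the handler propagates failures, which is what the HADOOP-9502 fix restored.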



[jira] [Created] (HADOOP-9721) Incorrect logging.properties file for hadoop-httpfs

2013-07-10 Thread Mark Grover (JIRA)
Mark Grover created HADOOP-9721:
---

 Summary: Incorrect logging.properties file for hadoop-httpfs
 Key: HADOOP-9721
 URL: https://issues.apache.org/jira/browse/HADOOP-9721
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, conf
Affects Versions: 2.0.4-alpha
 Environment: Maven 3.0.2 on CentOS6.2
Reporter: Mark Grover


Tomcat ships with a default logging.properties file that's generic enough to be 
used; however, we already override it with a custom logging.properties file, as 
seen at 
https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557

This is necessary so that the log location can be controlled by 
${httpfs.log.dir} (instead of the default ${catalina.base}/logs), the prefix of 
the log file names can be controlled, etc.

In any case, this overriding doesn't always happen. In my environment, the 
default logging.properties file doesn't get overridden by the custom one. The 
reason is that the destination logging.properties file already exists, and the 
Maven pom's copy command silently fails to overwrite it. If we explicitly 
delete the destination logging.properties file first, the copy completes 
successfully. You may notice that we already do the same thing with server.xml 
(which doesn't have this problem): we explicitly delete the destination file 
first and then copy it over. We should do the same with logging.properties as 
well.



[jira] [Resolved] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-9719.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

+1 for the patch.  I committed this to branch-1-win.  Thank you again, Xi!

> Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
> exit codes
> 
>
> Key: HADOOP-9719
> URL: https://issues.apache.org/jira/browse/HADOOP-9719
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Xi Fang
>Assignee: Xi Fang
>Priority: Minor
>  Labels: test
> Fix For: 1-win
>
> Attachments: HADOOP-9719.patch
>
>
> TestFsShellReturnCode#testChgrp() failed when we try to use "-chgrp" to 
> change group association of files to "admin".
> {code}
> // Test 1: exit code for chgrp on existing file is 0
> String argv[] = { "-chgrp", "admin", f1 };
> verify(fs, "-chgrp", argv, 1, fsShell, 0);
> {code}
> On Windows, this is the error information:
> org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
> (1332): No mapping between account names and security IDs was done.
> Invalid group name: admin
> This test case passed previously, but it appears to have passed only because 
> of another bug in FsShell#runCmdHandler 
> (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
> FsShell#runCmdHandler may not return error exit codes for some exceptions 
> (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
> FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
> the previous branch-1-win, even if "admin" is not a valid group, no error was 
> reported. The fix for HADOOP-9502 makes this test fail.
> This test also failed on Linux.



[jira] [Created] (HADOOP-9722) Branch-1-win TestNativeIO failed caused by Window incompatible test case

2013-07-10 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9722:
---

 Summary: Branch-1-win TestNativeIO failed caused by Window 
incompatible test case
 Key: HADOOP-9722
 URL: https://issues.apache.org/jira/browse/HADOOP-9722
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win


org.apache.hadoop.io.nativeio.TestNativeIO#testPosixFadvise() failed on 
Windows. Here is the error information.
\dev\zero (The system cannot find the path specified)
java.io.FileNotFoundException: \dev\zero (The system cannot find the path 
specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:120)
at java.io.FileInputStream.<init>(FileInputStream.java:79)
at 
org.apache.hadoop.io.nativeio.TestNativeIO.testPosixFadvise(TestNativeIO.java:277)
The root cause is that "/dev/zero" is used, and Windows does not have devices 
like the Unix /dev/zero or /dev/random.
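A common way to handle Unix-only device paths in tests is a platform guard. The actual HADOOP-9722 patch may do this differently, so treat the following as a hedged sketch with illustrative names. Note that Windows's NUL device behaves like /dev/null (reads hit EOF), not /dev/zero (which yields endless zeros), so there is no drop-in substitute and skipping is the usual choice.

```java
public class PlatformGuard {
    static boolean onWindows() {
        return System.getProperty("os.name").toLowerCase().contains("win");
    }

    // Returns the device to read from, or null when the test should be
    // skipped on this platform (illustrative helper, not Hadoop code).
    static String fadviseInputPath() {
        return onWindows() ? null : "/dev/zero";
    }

    public static void main(String[] args) {
        String path = fadviseInputPath();
        System.out.println(path == null
                ? "SKIP: no /dev/zero equivalent on this platform"
                : "RUN: would open " + path);
    }
}
```

In a JUnit test the same decision is usually expressed with an assumption (e.g. org.junit.Assume) so the test is reported as skipped rather than passed.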



Re: creating 2.2.0 version in JIRA

2013-07-10 Thread Alejandro Abdelnur
I've just committed to branch-2.1 the following JIRAs that were only in
branch-2 due to misunderstanding (per previous email).

I've also updated all CHANGES.txt in trunk/branch-2/branch-2.1

Thanks.

Alejandro

MAPREDUCE-5333. Add test that verifies MRAM works correctly when sending
requests with non-normalized capabilities. (ywskycn via tucu)

HADOOP-9661. Allow metrics sources to be extended. (sandyr via tucu)

HADOOP-9355.  Abstract symlink tests to use either FileContext or
FileSystem.  (Andrew Wang via Colin Patrick McCabe)

HADOOP-9673.  NetworkTopology: when a node can't be added, print out its
location for diagnostic purposes.  (Colin Patrick McCabe)

HADOOP-9414.  Refactor out FSLinkResolver and relevant helper methods.
(Andrew Wang via Colin Patrick McCabe)

HADOOP-9416.  Add new symlink resolution methods in FileSystem and
FileSystemLinkResolver.  (Andrew Wang via Colin Patrick McCabe)

HDFS-4797. BlockScanInfo does not override equals(..) and hashCode()
consistently.  (szetszwo)

YARN-866. Add test for class ResourceWeights. (ywskycn via tucu)

YARN-736. Add a multi-resource fair sharing metric. (sandyr via tucu)

YARN-883. Expose Fair Scheduler-specific queue metrics. (sandyr via tucu)



On Wed, Jul 10, 2013 at 12:58 PM, Arun C Murthy  wrote:

> Sounds good. I'll re-create branch-2.1.0-beta from branch-2.1-beta when
> the last 2 blockers are in.
>
> thanks,
> Arun
>
> On Jul 10, 2013, at 10:56 AM, Alejandro Abdelnur 
> wrote:
>
> > If that is the case, then I'd like to push the following JIRAs that have
> > been committed to branch-2 to branch-2.1 when the first RC was just out
> and
> > we didn't know that many more things would come in.
> >
> > I'm planning to push these JIRAs mid afternoon PST today. If there is any
> > that should not make it, please speak up.
> >
> > Thanks.
> >
> > 
> > common:
> >
> >HADOOP-9661. Allow metrics sources to be extended. (sandyr via tucu)
> >
> >HADOOP-9370.  Write FSWrapper class to wrap FileSystem and FileContext
> > for
> >better test coverage.  (Andrew Wang via Colin Patrick McCabe)
> >
> >HADOOP-9355.  Abstract symlink tests to use either FileContext or
> >FileSystem.  (Andrew Wang via Colin Patrick McCabe)
> >
> >HADOOP-9673.  NetworkTopology: when a node can't be added, print out
> its
> >location for diagnostic purposes.  (Colin Patrick McCabe)
> >
> >HADOOP-9414.  Refactor out FSLinkResolver and relevant helper methods.
> >(Andrew Wang via Colin Patrick McCabe)
> >
> >HADOOP-9416.  Add new symlink resolution methods in FileSystem and
> >FileSystemLinkResolver.  (Andrew Wang via Colin Patrick McCabe)
> >
> >
> > hdfs:
> >
> >HDFS-4908. Reduce snapshot inode memory usage.  (szetszwo)
> >
> > yarn:
> >
> >YARN-866. Add test for class ResourceWeights. (ywskycn via tucu)
> >
> >YARN-736. Add a multi-resource fair sharing metric. (sandyr via tucu)
> >
> >YARN-883. Expose Fair Scheduler-specific queue metrics. (sandyr via
> > tucu)
> >
> > mapreduce:
> >
> >MAPREDUCE-5333. Add test that verifies MRAM works correctly when
> sending
> >requests with non-normalized capabilities. (ywskycn via tucu)
> >
> > 
> >
> >
> >
> > On Tue, Jul 9, 2013 at 10:54 AM, Arun C Murthy 
> wrote:
> >
> >>
> >> On Jul 2, 2013, at 3:54 PM, Alejandro Abdelnur 
> wrote:
> >>
> >>> We need clarification on this then.
> >>>
> >>> I was under the impression that branch-2 would be 2.2.0.
> >>
> >> Sorry, I missed this thread - thanks to Jason for pointing me.
> >>
> >> As we discussed, the idea was that we are not adding new features to the
> >> the beta release (2.1.x-beta) so that we can focus on stabilizing it and
> >> releasing as hadoop-2.2.0 i.e. GA of hadoop-2. See
> http://s.apache.org/lZ8
> >> .
> >>
> >> Hence, by default, new features go to branch-2 with fix-version as 2.3.x.
> >>
> >> Hope that makes sense. I'll fix branch-2 to set version to
> 2.3.0-SNAPSHOT
> >> to ease further confusion.
> >>
> >> thanks,
> >> Arun
> >>
> >>>
> >>> thx
> >>>
> >>> On Tue, Jul 2, 2013 at 2:38 PM, Jason Lowe 
> wrote:
> >>>
>  I thought Arun intends for 2.2.0 to be created off of
> branch-2.1.0-beta
>  and not off of branch-2.  As I understand it, only critical blockers
> >> will
>  be the delta between 2.1.0-beta and 2.2.0 and items checked into
> >> branch-2
>  should be marked as  fixed in 2.3.0.
> 
>  Part of the confusion is that currently branch-2 builds as
> >> 2.2.0-SNAPSHOT,
>  but I believe Arun intended it to be 2.3.0-SNAPSHOT.
> 
>  Jason
> 
> 
>  On 06/21/2013 12:05 PM, Alejandro Abdelnur wrote:
> 
> > Thanks Suresh, didn't know that, will do.
> >
> >
> > On Fri, Jun 21, 2013 at 9:48 AM, Suresh Srinivas <
> >> sur...@hortonworks.com
> >> wrote:
> >
> > I have added in to HDFS, HADOOP, MAPREDUCE projects. Can someone add
> it
> >> for
> >> YARN?
> >>
> >>
> >> On Fri, Jun 21, 20

[jira] [Resolved] (HADOOP-9722) Branch-1-win TestNativeIO failed caused by Window incompatible test case

2013-07-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-9722.
---

  Resolution: Fixed
Target Version/s: 1-win
Hadoop Flags: Reviewed

+1 for the patch.  I committed this to branch-1-win.  Thank you for the 
contribution, Xi.

> Branch-1-win TestNativeIO failed caused by Window incompatible test case
> 
>
> Key: HADOOP-9722
> URL: https://issues.apache.org/jira/browse/HADOOP-9722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
> Environment: Windows
>Reporter: Xi Fang
>Assignee: Xi Fang
>Priority: Minor
> Fix For: 1-win
>
> Attachments: HADOOP-9722.patch
>
>
> org.apache.hadoop.io.nativeio.TestNativeIO#testPosixFadvise() failed on 
> Windows. Here is the error information.
> \dev\zero (The system cannot find the path specified)
> java.io.FileNotFoundException: \dev\zero (The system cannot find the path 
> specified)
> at java.io.FileInputStream.open(Native Method)
> at java.io.FileInputStream.<init>(FileInputStream.java:120)
> at java.io.FileInputStream.<init>(FileInputStream.java:79)
> at 
> org.apache.hadoop.io.nativeio.TestNativeIO.testPosixFadvise(TestNativeIO.java:277)
> The root cause is that "/dev/zero" is used, and Windows does not have 
> devices like the Unix /dev/zero or /dev/random.



[jira] [Created] (HADOOP-9723) Improve error message when hadoop archive output path already exists

2013-07-10 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-9723:
---

 Summary: Improve error message when hadoop archive output path 
already exists
 Key: HADOOP-9723
 URL: https://issues.apache.org/jira/browse/HADOOP-9723
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha, 3.0.0
Reporter: Stephen Chu
Priority: Trivial


When creating a hadoop archive and specifying an output path of an already 
existing file, we get an "Invalid Output" error message.

{code}
[schu@hdfs-vanilla-1 ~]$ hadoop archive -archiveName foo.har -p /user/schu 
testDir1 /user/schu
Invalid Output: /user/schu/foo.har
{code}

This error message could be improved to tell users immediately that the output 
path already exists.
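A minimal sketch of the suggested improvement: check for existence up front and say so in the message. The method name and message wording are illustrative assumptions, not the actual archive tool code, which operates on Hadoop Paths rather than java.nio paths.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OutputCheck {
    // Validate the archive output path up front and explain failures
    // instead of emitting a bare "Invalid Output".
    static String validateOutput(Path out) {
        if (Files.exists(out)) {
            return "Invalid Output: " + out + " (path already exists)";
        }
        return "OK: " + out;
    }

    public static void main(String[] args) throws IOException {
        Path existing = Files.createTempFile("archive", ".har");
        System.out.println(validateOutput(existing)); // explains the failure
        Files.delete(existing);
        System.out.println(validateOutput(existing)); // now fine
    }
}
```

The point is only the clearer wording the report asks for; the archive tool's real validation would also reject other invalid outputs.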



[jira] [Created] (HADOOP-9724) Trying to access har files within a har file complains about no index

2013-07-10 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-9724:
---

 Summary: Trying to access har files within a har file complains 
about no index
 Key: HADOOP-9724
 URL: https://issues.apache.org/jira/browse/HADOOP-9724
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.4-alpha, 3.0.0
Reporter: Stephen Chu
Priority: Minor


If a har file contains another har file, accessing the inner har file through 
FsShell will complain about no index file, even if the index file exists.

{code}
[schu@hdfs-vanilla-1 ~]$ hdfs dfs -ls 
har:///user/schu/foo4.har/testDir1/testDir2/foo3.har
ls: Invalid path for the Har Filesystem. No index file in 
har:/user/schu/foo4.har/testDir1/testDir2/foo3.har
[schu@hdfs-vanilla-1 ~]$ hdfs dfs -ls /user/schu/testDir1/testDir2/foo3.har
Found 4 items
-rw-r--r--   1 schu supergroup  0 2013-07-10 23:22 
/user/schu/testDir1/testDir2/foo3.har/_SUCCESS
-rw-r--r--   5 schu supergroup 91 2013-07-10 23:22 
/user/schu/testDir1/testDir2/foo3.har/_index
-rw-r--r--   5 schu supergroup 22 2013-07-10 23:22 
/user/schu/testDir1/testDir2/foo3.har/_masterindex
-rw-r--r--   1 schu supergroup  0 2013-07-10 23:22 
/user/schu/testDir1/testDir2/foo3.har/part-0
[schu@hdfs-vanilla-1 ~]$ hdfs dfs -ls 
har:///user/schu/testDir1/testDir2/foo3.har
Found 1 items
drwxr-xr-x   - schu supergroup  0 2013-07-10 23:22 
har:///user/schu/testDir1/testDir2/foo3.har/testDir1
[schu@hdfs-vanilla-1 ~]$ 
{code}
