Build failed in Jenkins: Hadoop-Common-trunk #881

2013-09-04 Thread Apache Jenkins Server
See 

Changes:

[ivanmi] HADOOP-9924. FileUtil.createJarWithClassPath() does not generate 
relative classpath correctly. Contributed by Shanyu Zhao.

[vinodkv] YARN-1124. Modified YARN CLI application list to display new and 
submitted applications together with running apps by default, following up 
YARN-1074. Contributed by Xuan Gong.

[kihwal] HDFS-5150. Allow per NN SPN for internal SPNEGO. Contributed by
Kihwal Lee.

--
[...truncated 55278 lines...]
Adding reference: maven.project
Adding reference: maven.project.helper
Adding reference: maven.local.repository
[DEBUG] Initialize Maven Ant Tasks
parsing buildfile jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.6/maven-antrun-plugin-1.6.jar!/org/apache/maven/ant/tasks/antlib.xml with URI = jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.6/maven-antrun-plugin-1.6.jar!/org/apache/maven/ant/tasks/antlib.xml from a zip file
parsing buildfile jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar!/org/apache/tools/ant/antlib.xml with URI = jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar!/org/apache/tools/ant/antlib.xml from a zip file
Class org.apache.maven.ant.tasks.AttachArtifactTask loaded from parent loader (parentFirst)
 +Datatype attachartifact org.apache.maven.ant.tasks.AttachArtifactTask
Class org.apache.maven.ant.tasks.DependencyFilesetsTask loaded from parent loader (parentFirst)
 +Datatype dependencyfilesets org.apache.maven.ant.tasks.DependencyFilesetsTask
Setting project property: test.build.dir -> 

Setting project property: test.exclude.pattern -> _
Setting project property: hadoop.assemblies.version -> 3.0.0-SNAPSHOT
Setting project property: test.exclude -> _
Setting project property: distMgmtSnapshotsId -> apache.snapshots.https
Setting project property: project.build.sourceEncoding -> UTF-8
Setting project property: java.security.egd -> file:///dev/urandom
Setting project property: distMgmtSnapshotsUrl -> https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl -> https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: test.build.data -> 

Setting project property: commons-daemon.version -> 1.0.13
Setting project property: hadoop.common.build.dir -> 

Setting project property: testsThreadCount -> 4
Setting project property: maven.test.redirectTestOutputToFile -> true
Setting project property: jdiff.version -> 1.0.9
Setting project property: build.platform -> Linux-i386-32
Setting project property: distMgmtStagingName -> Apache Release Distribution Repository
Setting project property: project.reporting.outputEncoding -> UTF-8
Setting project property: protobuf.version -> 2.5.0
Setting project property: failIfNoTests -> false
Setting project property: protoc.path -> ${env.HADOOP_PROTOC_PATH}
Setting project property: distMgmtStagingId -> apache.staging.https
Setting project property: distMgmtSnapshotsName -> Apache Development Snapshot Repository
Setting project property: ant.file -> 

[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId -> org.apache.hadoop
Setting project property: project.artifactId -> hadoop-common-project
Setting project property: project.name -> Apache Hadoop Common Project
Setting project property: project.description -> Apache Hadoop Common Project
Setting project property: project.version -> 3.0.0-SNAPSHOT
Setting project property: project.packaging -> pom
Setting project property: project.build.directory -> 

Setting project property: project.build.outputDirectory -> 

Setting project property: project.build.testOutputDirectory -> 

Setting project property: project.build.sourceDirectory -> 

Setting project property: project.build.testSourceDirectory -> 

Setting project property: localRepository -> id: local
  url: file:///home/jenkins/.m2/repository/
   layout: none
Setti

Re: [DISCUSS] Hadoop SSO/Token Server Components

2013-09-04 Thread Larry McCay
Chris -

I am curious whether there are any guidelines for feature branch use.

The general goals should be to:
* keep branches as small and as easily reviewable as possible for a given
feature
* decouple the pluggable framework from any specific central server
implementation
* scope specific content into iterations that can be merged into trunk on
their own and then development continued in new branches for the next
iteration

So, I guess the questions that immediately come to mind are:
1. Is there a document that describes the best way to do this?
2. How best do we leverage code being done in one feature branch within
another?

Thanks!

--larry



On Tue, Sep 3, 2013 at 10:00 PM, Zheng, Kai wrote:

> This looks good and reasonable to me. Thanks Chris.
>
> -----Original Message-----
> From: Chris Douglas [mailto:cdoug...@apache.org]
> Sent: Wednesday, September 04, 2013 6:45 AM
> To: common-dev@hadoop.apache.org
> Subject: Re: [DISCUSS] Hadoop SSO/Token Server Components
>
> On Tue, Sep 3, 2013 at 5:20 AM, Larry McCay wrote:
> > One outstanding question for me - how do we go about getting the
> > branches created?
>
> Once a group has converged on a purpose - ideally with some initial code
> from JIRA - please go ahead and create the feature branch in svn.
> There's no ceremony. -C
>
> > On Tue, Aug 6, 2013 at 6:22 PM, Chris Nauroth wrote:
> >
> >> Near the bottom of the bylaws, it states that addition of a "New
> >> Branch Committer" requires "Lazy consensus of active PMC members." I
> >> think this means that you'll need to get a PMC member to sponsor the
> >> vote for you. Regular committer votes happen on the private PMC mailing
> >> list, and I assume it would be the same for a branch committer vote.
> >>
> >> http://hadoop.apache.org/bylaws.html
> >>
> >> Chris Nauroth
> >> Hortonworks
> >> http://hortonworks.com/
> >>
> >>
> >>
> >> On Tue, Aug 6, 2013 at 2:48 PM, Larry McCay wrote:
> >>
> >> > That sounds perfect!
> >> > I have been thinking of late that we would maybe need an incubator
> >> > project or something for this - which would be unfortunate.
> >> >
> >> > This would allow us to move much more quickly with a set of patches
> >> > broken up into consumable/understandable chunks that are made
> >> > functional more easily within the branch.
> >> > I assume that we need to start a separate thread for DISCUSS or
> >> > VOTE to start that process - correct?
> >> >
> >> > On Aug 6, 2013, at 4:15 PM, Alejandro Abdelnur wrote:
> >> >
> >> > > yep, that is what I meant. Thanks Chris
> >> > >
> >> > >
> >> > >> On Tue, Aug 6, 2013 at 1:12 PM, Chris Nauroth <cnaur...@hortonworks.com> wrote:
> >> > >
> >> > >> Perhaps this is also a good opportunity to try out the new
> >> > >> "branch committers" clause in the bylaws, enabling non-committers
> >> > >> who are working on this to commit to the feature branch.
> >> > >>
> >> > >>
> >> > >>
> >> >
> >> http://mail-archives.apache.org/mod_mbox/hadoop-general/201308.mbox/%3CCACO5Y4we4d8knB_xU3a=hr2gbeqo5m3vau+inba0li1i9e2...@mail.gmail.com%3E
> >> > >>
> >> > >> Chris Nauroth
> >> > >> Hortonworks
> >> > >> http://hortonworks.com/
> >> > >>
> >> > >>
> >> > >>
> >> > >> On Tue, Aug 6, 2013 at 1:04 PM, Alejandro Abdelnur wrote:
> >> > >>
> >> > >>> Larry,
> >> > >>>
> >> > >>> Sorry for the delay answering. Thanks for laying things out -
> >> > >>> yes, it makes sense.
> >> > >>>
> >> > >>> Given the large scope of the changes, the number of JIRAs, and
> >> > >>> the number of developers involved, wouldn't it make sense to
> >> > >>> create a feature branch for all this work, so as not to
> >> > >>> destabilize trunk (more ;)?
> >> > >>>
> >> > >>> Thanks again.
> >> > >>>
> >> > >>>
> >> > >>> On Tue, Jul 30, 2013 at 9:43 AM, Larry McCay wrote:
> >> > >>>
> >> >  The following JIRA was filed to provide a token and basic
> >> >  authority implementation for this effort:
> >> >  https://issues.apache.org/jira/browse/HADOOP-9781
> >> > 
> >> >  I have attached an initial patch, though I have yet to submit it,
> >> >  since it is dependent on the patch for CMF that was posted to:
> >> >  https://issues.apache.org/jira/browse/HADOOP-9534
> >> >  and this patch still has a couple of outstanding issues - javac
> >> >  warnings for com.sun classes for certificate generation and 11
> >> >  javadoc warnings.
> >> > 
> >> >  Please feel free to review the patches and raise any questions or
> >> >  concerns related to them.
> >> > 
> >> >  On Jul 26, 2013, at 8:59 PM, Larry McCay wrote:
> >> > 
> >> > > Hello All -
> >> > >
> >> > > In an effort to scope an initial iteration that provides value to
> >> > > the community while focusing on the pluggable authenti

[DISCUSS] Security Efforts and Branching

2013-09-04 Thread larry mccay
Hello Kai, Jerry and common-dev'ers -

I would like to try to put a game plan together for how we go about
getting some of these larger security changes into branches that are
manageable, reviewable, and ultimately mergeable in a timely manner.

To even start this discussion, I think we need an inventory of the
high-level projects that are underway in parallel. We can then identify
those that are at the point where patches can be used to seed a branch.
This will give us some insight into how to break the work into phases.

Off the top of my head, I can think of the following high-level efforts:

1. Pluggable Authentication and Token based SSO
2. CryptoFS for volume level encryption
3. Hive Table/Column Level Encryption (admittedly this is Hive work but it
will leverage common work done in Hadoop)
4. Authorization

Now, #1 and #2 above have related Jiras and a number of patches available,
and are therefore early contenders for branching.

#1 has a draft for an initial iteration that was discussed in another
thread, and I will attach a PDF version of the iteration-1 proposal to
this mail.

I propose that we converge on an initial plan based on further discussion
of the attached iteration and file a Jira to represent that iteration. We
can then break down the larger patches on existing Jiras to fit into the
constrained scope of the agreed-upon iteration and attach them to subtasks
of the iteration Jira.

We can then seed a Pluggable Authentication and Token-based SSO branch with
the related patches from HADOOP-9392, HADOOP-9534, and HADOOP-9781.

Now, whether we introduce a whole central SSO service in that branch is up
for discussion, but I personally think that it would violate the "keeping
it small and manageable" goal. I am wondering whether a branch for security
services would do well to decouple the consumers from a specific
implementation that happens to be remote. Then, within the Pluggable
Authentication branch, we can concentrate on the consumer level and local
implementations.

I assume that the CryptoFS work is also intended to be done within
branches, so we have to consider how to leverage common code for things
like key access for encryption/decryption and signing/verifying. This sort
of thing is being introduced by HADOOP-9534 as part of the Pluggable
Authentication branch in support of JWT tokens. So, we will have to think
through what branches are required for Crypto in the near term.
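
Purely as an illustration of the kind of shared abstraction being discussed
here - this is not the HADOOP-9534 API, and every name below is
hypothetical - such common code might boil down to a small interface that
both the token work and the CryptoFS work consume:

  import java.io.IOException;
  import javax.crypto.SecretKey;

  // Hypothetical sketch only: the shape of common key-access code that
  // both token signing/verifying and CryptoFS encryption could share.
  public interface SecurityKeyAccess {
    // Key material for encryption/decryption (e.g., a CryptoFS volume key).
    SecretKey getSecretKey(String alias) throws IOException;

    // Signing and verification (e.g., for JWT tokens).
    byte[] sign(String alias, byte[] data) throws IOException;
    boolean verify(String alias, byte[] data, byte[] signature)
        throws IOException;
  }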

Perhaps we can concentrate on those portions of crypto that will be of
immediate benefit to iteration-1 and leave the higher-order CryptoFS work
to another iteration? I don't think we want an explosion of branches at any
given time. If we can limit ourselves to specific areas, close down the
iteration, and get it merged before creating a new set of branches, that
would be best. Again, ease of review, test, and merge is important for us.

I am curious how development across related branches like these would
work, though. If the service work needs to leverage work from the other
branch, how do we do that easily? Can we branch a branch? Will that
require both to be ready to merge at the same time?

Perhaps low-level dependencies can be duplicated for some time and then
consolidated later?

Anyway, specific questions:

Does the proposal to start with the attached iteration-1 draft to create an
iteration Jira make sense to everyone?

Does anyone have specific suggestions on the best way to manage branches
that should be decoupled but at the same time leverage common code?

Any other thoughts or insight?

thanks,

--larry


[jira] [Created] (HADOOP-9933) Augment Service model to support starting stopped services

2013-09-04 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-9933:


 Summary: Augment Service model to support starting stopped services
 Key: HADOOP-9933
 URL: https://issues.apache.org/jira/browse/HADOOP-9933
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


For ResourceManager-HA (YARN-149 and co.), we would want to start/stop/start
the RM's active services as it transitions to Active/Standby/Active
respectively. In the current service model, we can't start services that are
already stopped.

It would be nice to augment this. To avoid accidental restarts of stopped
services, we could add another API: start(boolean restartIfStopped). Thoughts?
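
To make that concrete, here is a rough sketch of how such an overload might
sit on top of the existing Service lifecycle. Only the start(boolean
restartIfStopped) signature comes from the proposal above; the class, the
hook, and the state handling are illustrative, not a committed design:

  // Hypothetical sketch only - not a committed API. The idea: a stopped
  // service may be started again, but only when the caller opts in.
  public abstract class RestartableService
      extends org.apache.hadoop.service.AbstractService {

    protected RestartableService(String name) {
      super(name);
    }

    public void start(boolean restartIfStopped) {
      if (getServiceState() == STATE.STOPPED) {
        if (!restartIfStopped) {
          // Guard against accidental restarts of stopped services.
          throw new IllegalStateException(
              "Service " + getName() + " is already stopped");
        }
        // Hypothetical hook; a real implementation would also need the
        // service state model to permit the STOPPED -> STARTED transition.
        resetForRestart();
      }
      start();  // existing no-arg lifecycle method
    }

    // Hypothetical hook for subclasses to rewind state left by stop().
    protected abstract void resetForRestart();
  }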

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9934) the generic (non x86) libhadoop CRC code makes unaligned loads and stores

2013-09-04 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9934:


 Summary: the generic (non x86) libhadoop CRC code makes unaligned 
loads and stores
 Key: HADOOP-9934
 URL: https://issues.apache.org/jira/browse/HADOOP-9934
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.3.0
Reporter: Colin Patrick McCabe
Priority: Minor


The generic (non-x86) libhadoop CRC code makes unaligned loads and stores.
Some architectures don't support unaligned loads and stores, or handle them
slowly. This code should be made truly CPU-neutral by removing the unaligned
accesses.
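
The usual remedy is to assemble a word from individual byte loads instead
of dereferencing a possibly unaligned pointer: byte loads are always
aligned, and compilers fold the pattern back into a single wide load on
CPUs that permit it. Sketched in Java below purely for illustration (the
actual fix belongs in the native C code, where the same idiom - per-byte
loads, or memcpy into an aligned local - applies):

  // Illustration of the alignment-safe idiom (the real change is in C).
  // Reads a 32-bit little-endian value from any offset using only
  // single-byte loads, so no unaligned word access ever occurs.
  final class AlignedLoads {
    static int loadIntLittleEndian(byte[] buf, int off) {
      return (buf[off]     & 0xff)
           | (buf[off + 1] & 0xff) << 8
           | (buf[off + 2] & 0xff) << 16
           | (buf[off + 3] & 0xff) << 24;
    }
  }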

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: New dev. environment issue

2013-09-04 Thread Tsuyoshi OZAWA
Hi John,

I'm developing Hadoop on OS X (10.8.4). Some tests occasionally fail on
OS X, though I confirmed that TestFileUtil on current trunk works on my
environment. Hadoop's Jenkins CI runs on Linux, so I'd recommend preparing
a Linux server. I avoid the problem by running Linux servers on virtual
machines.

Thanks,
Tsuyoshi

On Wed, Sep 4, 2013 at 1:27 AM, John Chilton wrote:
> Hi Matt,
>
> I was hoping to be able to test at some point because I would like to
> contribute.
>
> John
>
> On Tue, September 3, 2013 12:10 am, Matt Fellows wrote:
>> You could run:
>> mvn install -DskipTests if you are just looking to build it and don't
>> care about the tests?
>>
>>
>> On Tue, Sep 3, 2013 at 2:46 AM, John Chilton wrote:
>>
>>> Hi all, I am trying to set up a new development environment on a Mac
>>> (OS X 10.8.4).
>>>
>>> I have checked out the source using NetBeans builtin SVN support and I
>>> have installed ProtoBuf 2.5.0.
>>>
>>> Now I am running "mvn clean" and "mvn install". "mvn install" fails,
>>> listing some failures and one error.
>>>
>>> I am wondering if someone can help me sort this out. Some of the output
>>> of "mvn install" is below (note the "Tests in error" section too).
>>>
>>> Thanks,
>>>
>>> John Chilton
>>>
>>>
>>>
>>> Results :
>>>
>>> Failed tests:   testFailFullyDelete(org.apache.hadoop.fs.TestFileUtil):
>>> The directory xSubDir *should* not have been deleted. expected: but was:
>>>   testFailFullyDeleteContents(org.apache.hadoop.fs.TestFileUtil): The
>>> directory xSubDir *should* not have been deleted. expected: but
>>> was:
>>>
>>> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem):
>>> Should throw IOException
>>>   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for
>>> build/test/temp/RELATIVE1 in
>>> build/test/temp/RELATIVE0/block2054571533301913960.tmp - FAILED!
>>>
>>> testROBufferDirAndRWBufferDir[0](org.apache.hadoop.fs.TestLocalDirAllocator):
>>> Checking for build/test/temp/RELATIVE2 in
>>> build/test/temp/RELATIVE1/block3744072557851527092.tmp - FAILED!
>>>   testRWBufferDirBecomesRO[0](org.apache.hadoop.fs.TestLocalDirAllocator):
>>> Checking for build/test/temp/RELATIVE3 in
>>> build/test/temp/RELATIVE4/block2732071443989594601.tmp - FAILED!
>>>   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for
>>>
>>> /Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
>>> in
>>>
>>> /Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block446501941002298981.tmp
>>> - FAILED!
>>>
>>> testROBufferDirAndRWBufferDir[1](org.apache.hadoop.fs.TestLocalDirAllocator):
>>> Checking for
>>>
>>> /Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
>>> in
>>>
>>> /Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block260253564478200923.tmp
>>> - FAILED!
>>>   testRWBufferDirBecomesRO[1](org.apache.hadoop.fs.TestLocalDirAllocator):
>>> Checking for
>>>
>>> /Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE3
>>> in
>>>
>>> /Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE4/block6257562622887355091.tmp
>>> - FAILED!
>>>   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for
>>>
>>> file:/Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
>>> in
>>>
>>> /Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block1484460518022294080.tmp
>>> - FAILED!
>>>
>>> testROBufferDirAndRWBufferDir[2](org.apache.hadoop.fs.TestLocalDirAllocator):
>>> Checking for
>>>
>>> file:/Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
>>> in
>>>
>>> /Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block6591077298608887273.tmp
>>> - FAILED!
>>>   testRWBufferDirBecomesRO[2](org.apache.hadoop.fs.TestLocalDirAllocator):
>>> Checking for
>>>
>>> file:/Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED3
>>> in
>>>
>>> /Users/John/NetBeansProjects/hadoop/trunk/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED4/block3348932985968175704.tmp
>>> - FAILED!
>>>   testReportChecksumFailure(org.apache.hadoop.fs.TestLocalFileSystem)
>>>   testDanglingLink(org.apache.hadoop.fs.TestSymlinkLocalFSFileContext):
>>> expected:<[root]> but was:<[]>
>>>   testDanglingLink(org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem):
>>> expected:<[root]> but was:<[]>
>>>
>>> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem):
>>> Should throw IOException
>>>   testTrash(org.apache.had

Re: [DISCUSS] Hadoop SSO/Token Server Components

2013-09-04 Thread Suresh Srinivas
> One aside: if you come across a bug, please try to fix it upstream and
> then merge into the feature branch rather than cherry-picking patches
> or only fixing it on the branch. It becomes very awkward to track. -C


Related to this: when refactoring code, as is generally required for large
feature development, consider doing the refactoring in trunk first and then
making the additional feature-specific changes in the feature branch. This
helps a lot in being able to merge trunk into the feature branch
periodically. It also keeps the eventual feature-to-trunk merge small and
the reviews easier.

Regards,
Suresh
