[jira] [Created] (HDFS-10736) Format disk balance command's output info

2016-08-09 Thread Yuanbo Liu (JIRA)
Yuanbo Liu created HDFS-10736:
-

 Summary: Format disk balance command's output info
 Key: HDFS-10736
 URL: https://issues.apache.org/jira/browse/HDFS-10736
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yuanbo Liu
Assignee: Yuanbo Liu


When users run the disk balancer command as below
{quote}
hdfs diskbalancer
{quote}
it doesn't print detailed information about the available options.
Also, when users run the disk balancer command incorrectly, the output is not
consistent with that of other commands.






[jira] [Created] (HDFS-10737) disk balance reporter print null for the volume's path

2016-08-09 Thread Yuanbo Liu (JIRA)
Yuanbo Liu created HDFS-10737:
-

 Summary: disk balance reporter print null for the volume's path
 Key: HDFS-10737
 URL: https://issues.apache.org/jira/browse/HDFS-10737
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: diskbalancer, hdfs
Reporter: Yuanbo Liu
Assignee: Yuanbo Liu


Reproduction steps:
1. hdfs diskbalancer -plan xxx.xx (hostname of the datanode)
2. If the plan JSON is created successfully, run
hdfs diskbalancer -report xxx.xx
The output is:
{noformat}
[DISK: volume-null] - 0.00 used: 45997/101122146304, 1.00 free: 
101122100307/101122146304, isFailed: False, isReadOnly: False, isSkip: False, 
isTransient: False.
{noformat}
{{vol.getPath()}} returns null in {{ReportCommand#handleTopReport}}, which is why the volume is printed as "volume-null".
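The proper fix is to make sure the volume path gets populated before the report is rendered, but a defensive fallback in the formatting code would at least avoid printing "volume-null". A minimal sketch of that idea (the class, helper, and field names below are illustrative, not the actual DiskBalancer code):
{code:java}
// Illustrative only: fall back to a readable label when the volume path
// has not been populated, instead of printing "null".
public class VolumeLabels {
  static String volumeLabel(String path, String uuid) {
    if (path != null && !path.isEmpty()) {
      return path;
    }
    // Assumption: the volume UUID is still available when the path is not.
    return uuid != null ? "uuid:" + uuid : "<unknown volume>";
  }

  public static void main(String[] args) {
    System.out.println(volumeLabel(null, "a1b2c3")); // prints "uuid:a1b2c3"
  }
}
{code}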






Re: [DISCUSS] Release numbering semantics with concurrent (>2) releases [Was Setting JIRA fix versions for 3.0.0 releases]

2016-08-09 Thread Karthik Kambatla
Most people I talked to found 3.0.0-alpha, 3.1.0-alpha/beta confusing. I am
not aware of any other software shipped that way. While being used by other
software does not make an approach right, I think we should adopt an
approach that is easy for our users to understand.

The notion of 3.0.0-alphaX and 3.0.0-betaX releases leading up to 3.0.0 (GA) has
been proposed and considered acceptable for a long while. Do people still consider
it okay? Is there a specific need to consider alternatives?

On Mon, Aug 8, 2016 at 11:44 AM, Junping Du  wrote:

> I think that incompatible API between 3.0.0-alpha and 3.1.0-beta is
> something less confusing than incompatible between 2.8/2.9 and 2.98.x
> alphas/2.99.x betas.
> Why not just follow our previous practice in the beginning of branch-2? we
> can have 3.0.0-alpha, 3.1.0-alpha/beta, but once when we are finalizing our
> APIs, we should bump up trunk version to 4.x for landing new incompatible
> changes.
>
> Thanks,
>
> Junping
> 
> From: Karthik Kambatla 
> Sent: Monday, August 08, 2016 6:54 PM
> Cc: common-...@hadoop.apache.org; yarn-...@hadoop.apache.org;
> hdfs-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> Subject: Re: [DISCUSS] Release numbering semantics with concurrent (>2)
> releases [Was Setting JIRA fix versions for 3.0.0 releases]
>
> I like the 3.0.0-alphaX approach primarily for simpler understanding of
> compatibility guarantees. Calling 3.0.0 alpha and 3.1.0 beta is confusing
> because, it is not immediately clear that 3.0.0 and 3.1.0 could be
> incompatible in APIs.
>
> I am open to something like 2.98.x for alphas and 2.99.x for betas leading
> to a 3.0.0 GA. I have seen other projects use this without causing much
> confusion.
>
> On Thu, Aug 4, 2016 at 6:01 PM, Konstantin Shvachko 
> wrote:
>
> > On Thu, Aug 4, 2016 at 11:20 AM, Andrew Wang 
> > wrote:
> >
> > > Hi Konst, thanks for commenting,
> > >
> > > On Wed, Aug 3, 2016 at 11:29 PM, Konstantin Shvachko <
> > shv.had...@gmail.com
> > > > wrote:
> > >
> > >> 1. I probably missed something but I didn't get it how "alpha"s made
> > >> their way into release numbers again. This was discussed on several
> > >> occasions and I thought the common perception was to use just three
> > level
> > >> numbers for release versioning and avoid branding them.
> > >> It is particularly confusing to have 3.0.0-alpha1 and 3.0.0-alpha2.
> What
> > >> is alphaX - fourth level? I think releasing 3.0.0 and setting trunk to
> > >> 3.1.0 would be perfectly in line with our current release practices.
> > >>
> > >
> > > We discussed release numbering a while ago when discussing the release
> > > plan for 3.0.0, and agreed on this scheme. "-alphaX" is essentially a
> > > fourth level as you say, but the intent is to only use it (and
> "-betaX")
> > in
> > > the leadup to 3.0.0.
> > >
> > > The goal here is clarity for end users, since most other enterprise
> > > software uses a a.0.0 version to denote the GA of a new major version.
> > Same
> > > for a.b.0 for a new minor version, though we haven't talked about that
> > yet.
> > > The alphaX and betaX scheme also shares similarity to release
> versioning
> > of
> > > other enterprise software.
> > >
> >
> > As you remember we did this (alpha, beta) for Hadoop-2 and I don't think
> it
> > went well with user perception.
> > Say release 2.0.5-alpha turned out to be quite good even though still
> > branded "alpha", while 2.2 was not and not branded.
> > We should move a release to stable, when people ran it and agree it is GA
> > worthy. Otherwise you never know.
> >
> >
> > >
> > >> 2. I do not see any confusions with releasing 2.8.0 after 3.0.0.
> > >> The release number is not intended to reflect historical release
> > >> sequence, but rather the point in the source tree, which it was
> branched
> > >> off. So one can release 2.8, 2.9, etc. after or before 3.0.
> > >>
> > >
> > > As described earlier in this thread, the issue here is setting the fix
> > > versions such that the changelog is a useful diff from a previous
> > version,
> > > and also clear about what changes are present in each branch. If we do
> > not
> > > order a specific 2.x before 3.0, then we don't know what 2.x to diff
> > from.
> > >
> >
> > So the problem is in determining the latest commit, which was not present
> > in the last release, when the last release bears higher number than the
> one
> > being released.
> > Interesting problem. Don't have a strong opinion on that. I guess it's OK
> > to have overlapping in changelogs.
> > As long as we keep following the rule that commits should be made to
> trunk
> > first and them propagated to lower branches until the target branch is
> > reached.
> >
> >
> > >
> > >> 3. I agree that current 3.0.0 branch can be dropped and re-cut. We may
> > >> think of another rule that if a release branch is not released in 3
> > month
> > >> it should be abandoned. Which is applicable to branch 2.8.0 and it is
> > too
> > >> much work syncing it

Re: [DISCUSS] Release numbering semantics with concurrent (>2) releases [Was Setting JIRA fix versions for 3.0.0 releases]

2016-08-09 Thread Karthik Kambatla
Another reason I like the 3.0.0-alphaX approach is the ease of
communicating compatibility guarantees.

A lot of our compatibility guarantees (e.g. API/wire compat) mention
"within a major release". For the user, thinking of 3.0.0 as the beginning
of a major release seems easier than 3.2.0 being the beginning. Most users
likely will not be interested in the alphas or betas; I assume downstream
projects and early adopters are the primary targets for these pre-GA
releases.

By capturing what we mean by alpha and beta, we can communicate the
compatibility guarantees moving from alpha1 to alphaX to betaX to GA; this
applies to both the Hadoop-2 model and the 3.0.0-alphaX model.
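For what it's worth, the -alphaX/-betaX qualifiers also order naturally under Maven-style version comparison, so the pre-GA releases sort before 3.0.0 itself. A small illustration, assuming maven-artifact's ComparableVersion is on the classpath (this is only a sketch of the ordering, not part of any proposal):
{code:java}
import org.apache.maven.artifact.versioning.ComparableVersion;

public class VersionOrdering {
  public static void main(String[] args) {
    ComparableVersion alpha1 = new ComparableVersion("3.0.0-alpha1");
    ComparableVersion beta1 = new ComparableVersion("3.0.0-beta1");
    ComparableVersion ga = new ComparableVersion("3.0.0");

    // Maven treats "alpha" and "beta" as pre-release qualifiers,
    // so both sort before the unqualified 3.0.0 release.
    System.out.println(alpha1.compareTo(beta1) < 0); // true
    System.out.println(beta1.compareTo(ga) < 0);     // true
  }
}
{code}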


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/

[Aug 8, 2016 4:19:54 PM] (varunsaxena) MAPREDUCE-6748. Enhance logging for 
Cluster.java around
[Aug 8, 2016 4:42:53 PM] (varunsaxena) YARN-4910. Fix incomplete log info in 
ResourceLocalizationService (Jun
[Aug 8, 2016 6:00:19 PM] (jitendra) HADOOP-10823. TestReloadingX509TrustManager 
is flaky. Contributed by
[Aug 8, 2016 7:02:53 PM] (arp) HADOOP-10682. Replace FsDatasetImpl object lock 
with a separate lock
[Aug 8, 2016 7:28:40 PM] (cnauroth) HADOOP-13403. AzureNativeFileSystem 
rename/delete performance
[Aug 8, 2016 7:36:27 PM] (arp) HADOOP-13457. Remove hardcoded absolute path for 
shell executable. (Chen
[Aug 8, 2016 9:28:07 PM] (vinodkv) YARN-5470. Addedum to differentiate exactly 
matching of log-files with
[Aug 8, 2016 10:06:03 PM] (lei) HADOOP-13380. TestBasicDiskValidator should not 
write data to /tmp
[Aug 8, 2016 10:11:05 PM] (weichiu) HADOOP-13395. Enhance TestKMSAudit. 
Contributed by Xiao Chen.
[Aug 9, 2016 12:34:56 AM] (sjlee) HADOOP-12747. support wildcard in libjars 
argument (sjlee)
[Aug 9, 2016 12:54:44 AM] (iwasakims) HADOOP-13439. Fix race between 
TestMetricsSystemImpl and




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM
   hadoop.tracing.TestTracing
   hadoop.security.TestRefreshUserMappings
   hadoop.yarn.logaggregation.TestAggregatedLogFormat
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
   hadoop.yarn.server.TestContainerManagerSecurity
   hadoop.yarn.client.api.impl.TestYarnClient
   hadoop.mapreduce.v2.hs.server.TestHSAdminServer
   hadoop.mapreduce.TestMRJobClient
   hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM

Timed out junit tests :

   org.apache.hadoop.http.TestHttpServerLifecycle

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/diff-compile-javac-root.txt
  [172K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [312K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/128/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [92K]
   
https://builds.apache.org/j

[jira] [Created] (HDFS-10739) libhdfs++: In RPC engine replace vector with deque for pending requests

2016-08-09 Thread Anatoli Shein (JIRA)
Anatoli Shein created HDFS-10739:


 Summary: libhdfs++: In RPC engine replace vector with deque for 
pending requests
 Key: HDFS-10739
 URL: https://issues.apache.org/jira/browse/HDFS-10739
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anatoli Shein


The pending-requests container in the RPC engine should be changed from a vector
to a deque in order to improve performance; a deque allows efficient removal from
the front, which a vector does not.
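The libhdfs++ change itself is in C++, but the trade-off is the same in any language: completing the oldest pending request means removing from the front of the container, which is linear-time on a vector and constant-time on a deque. A minimal Java sketch of the difference (purely illustrative, not the libhdfs++ code):
{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class PendingRequestQueues {
  public static void main(String[] args) {
    // Vector/list: removing the oldest request shifts every remaining
    // element, so each removal costs O(n).
    List<String> pendingList = new ArrayList<>();
    pendingList.add("call-1");
    pendingList.add("call-2");
    String oldestFromList = pendingList.remove(0);

    // Deque: removal from the front is O(1).
    Deque<String> pendingDeque = new ArrayDeque<>();
    pendingDeque.addLast("call-1");
    pendingDeque.addLast("call-2");
    String oldestFromDeque = pendingDeque.pollFirst();

    System.out.println(oldestFromList + " / " + oldestFromDeque);
  }
}
{code}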






[jira] [Created] (HDFS-10740) libhdfs++: Implement recursive directory generator

2016-08-09 Thread Anatoli Shein (JIRA)
Anatoli Shein created HDFS-10740:


 Summary: libhdfs++: Implement recursive directory generator
 Key: HDFS-10740
 URL: https://issues.apache.org/jira/browse/HDFS-10740
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anatoli Shein


This tool will allow us to benchmark and test our find functionality, and will be
a good example of how to issue a large number of namenode operations recursively.
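The tool itself targets libhdfs++ (C++), but the idea can be sketched against the Java FileSystem API; the class name and the depth/fan-out parameters below are illustrative only, not the planned tool:
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: build a directory tree of the given depth and fan-out
// so that recursive operations such as find have something to walk.
public class DirTreeGenerator {
  private final FileSystem fs;

  public DirTreeGenerator(FileSystem fs) {
    this.fs = fs;
  }

  public void generate(Path root, int depth, int fanOut) throws IOException {
    if (depth == 0) {
      return;
    }
    for (int i = 0; i < fanOut; i++) {
      Path child = new Path(root, "dir-" + depth + "-" + i);
      fs.mkdirs(child); // one namenode operation per directory
      generate(child, depth - 1, fanOut);
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    new DirTreeGenerator(fs).generate(new Path("/tmp/dirtree"), 3, 4);
  }
}
{code}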






[jira] [Created] (HDFS-10741) TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration fails consistently.

2016-08-09 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-10741:
-

 Summary: 
TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration fails 
consistently.
 Key: HDFS-10741
 URL: https://issues.apache.org/jira/browse/HDFS-10741
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rushabh S Shah


Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.783 sec <<< 
FAILURE! - in org.apache.hadoop.security.TestRefreshUserMappings
testRefreshSuperUserGroupsConfiguration(org.apache.hadoop.security.TestRefreshUserMappings)
  Time elapsed: 3.942 sec  <<< FAILURE!
java.lang.AssertionError: first auth for user2 should've succeeded: User: 
super_userL is not allowed to impersonate userL2
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.security.TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration(TestRefreshUserMappings.java:200)


Results :

Failed tests: 
  TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration:200 first 
auth for user2 should've succeeded: User: super_userL is not allowed to 
impersonate userL2






[jira] [Resolved] (HDFS-10741) TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration fails consistently.

2016-08-09 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-10741.

Resolution: Duplicate

Looks like it's a duplicate of HDFS-10738, where a patch has been uploaded. Let's
move over there and review that patch.

Thanks!

> TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration fails 
> consistently.
> ---
>
> Key: HDFS-10741
> URL: https://issues.apache.org/jira/browse/HDFS-10741
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Rushabh S Shah
>
> Following test is failing consistently in trunk.
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.783 sec <<< 
> FAILURE! - in org.apache.hadoop.security.TestRefreshUserMappings
> testRefreshSuperUserGroupsConfiguration(org.apache.hadoop.security.TestRefreshUserMappings)
>   Time elapsed: 3.942 sec  <<< FAILURE!
> java.lang.AssertionError: first auth for user2 should've succeeded: User: 
> super_userL is not allowed to impersonate userL2
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.security.TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration(TestRefreshUserMappings.java:200)
> Results :
> Failed tests: 
>   TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration:200 first 
> auth for user2 should've succeeded: User: super_userL is not allowed to 
> impersonate userL2






[jira] [Created] (HDFS-10742) Measurement of lock held time in FsDatasetImpl

2016-08-09 Thread Chen Liang (JIRA)
Chen Liang created HDFS-10742:
-

 Summary: Measurement of lock held time in FsDatasetImpl
 Key: HDFS-10742
 URL: https://issues.apache.org/jira/browse/HDFS-10742
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0-alpha2
Reporter: Chen Liang
Assignee: Chen Liang


This JIRA proposes to measure the time the lock of {{FsDatasetImpl}} is held by a
thread. Doing so will allow us to collect lock statistics.

This can be done by extending the {{AutoCloseableLock}} lock object in
{{FsDatasetImpl}}. In the future we can also consider replacing the lock with a
read-write lock.
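A minimal standalone sketch of the idea, wrapping a plain ReentrantLock rather than Hadoop's actual {{AutoCloseableLock}} (the class name and the logging threshold are illustrative only; reentrant acquisition is not handled here):
{code:java}
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch: an AutoCloseable lock that records how long it was
// held, so long hold times can be logged or fed into a metrics system.
public class TimedAutoCloseableLock implements AutoCloseable {
  private final ReentrantLock lock = new ReentrantLock();
  private long acquiredAtNanos;

  public TimedAutoCloseableLock acquire() {
    lock.lock();
    acquiredAtNanos = System.nanoTime();
    return this;
  }

  @Override
  public void close() {
    long heldMillis = (System.nanoTime() - acquiredAtNanos) / 1_000_000;
    lock.unlock();
    if (heldMillis > 100) { // illustrative threshold
      System.out.println("Lock held for " + heldMillis + " ms");
    }
  }
}

// Usage would mirror the try-with-resources pattern of AutoCloseableLock:
//
//   try (TimedAutoCloseableLock l = timedLock.acquire()) {
//     // critical section
//   }
{code}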






[Release thread] 2.6.5 release activities

2016-08-09 Thread Chris Trezzo
Based on the sentiment in the "[DISCUSS] 2.6.x line releases" thread, I
have moved forward with some of the initial effort in creating a 2.6.5
release. I am forking this thread so we have a dedicated 2.6.5 release
thread.

I have gone through the git logs and gathered a list of JIRAs that are in
branch-2.7 but are missing from branch-2.6. I limited the diff to issues
with a commit date after 1/26/2016. I did this because 2.6.4 was cut from
branch-2.6 around that date (http://markmail.org/message/xmy7ebs6l3643o5e)
and presumably issues that were committed to branch-2.7 before then were
already looked at as part of 2.6.4.

I have collected these issues in a spreadsheet and have given them an
initial triage on whether they are candidates for a backport to 2.6.5. The
spreadsheet is sorted by the status of the issues with the potential
backport candidates at the top. Here is a link to the spreadsheet:
https://docs.google.com/spreadsheets/d/1lfG2CYQ7W4q3olWpOCo6EBAey1WYC8hTRUemHvYPPzY/edit?usp=sharing

As of now, I have identified 16 potential backport candidates. Please take
a look at the list and let me know if there are any that you think should
not be on the list, or ones that you think I have missed. This was just an
initial high-level triage, so there could definitely be issues that are
mislabeled.

As a side note: we still need to look at the pre-commit build for 2.6 and
follow up with an addendum for HADOOP-12800.

Thanks everyone!
Chris Trezzo