Re: [DISCUSS] Hadoop 2.10.1 release

2020-09-01 Thread Masatake Iwasaki

Thanks, Mingliang Liu.

I volunteer to take the RM role then.
I would appreciate advice from those who have experience with it.

Masatake Iwasaki

On 2020/09/01 10:38, Mingliang Liu wrote:

I can see how I can help, but I can not take the RM role this time.

Thanks,

On Mon, Aug 31, 2020 at 12:15 PM Wei-Chiu Chuang
 wrote:


Hello,

I see that Masatake graciously agreed to volunteer with the Hadoop 2.10.1
release work in the 2.9 branch EOL discussion thread
https://s.apache.org/hadoop2.9eold

Would anyone else like to contribute as well?

Thanks





-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] End of Life Hadoop 2.9

2020-09-01 Thread Stephen O'Donnell
+1

Thanks,

Stephen.


On Tue, Sep 1, 2020 at 7:25 AM Masatake Iwasaki 
wrote:

> +1
>
> Thanks,
> Masatake Iwasaki
>
> On 2020/09/01 4:09, Wei-Chiu Chuang wrote:
> > Dear fellow Hadoop developers,
> >
> > Given the overwhelming feedback from the discussion thread
> > https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
> > thread for the community to vote and start the 2.9 EOL process.
> >
> > What this entails:
> >
> > (1) an official announcement that no further regular Hadoop 2.9.x
> releases
> > will be made after 2.9.2 (which was GA on 11/19/2019)
> > (2) resolve JIRAs that specifically target 2.9.3 as won't fix.
> >
> >
> > This vote will run for 7 days and will conclude by September 7th, 12:00pm
> > pacific time.
> > Committers are eligible to cast binding votes. Non-committers are
> welcomed
> > to cast non-binding votes.
> >
> > Here is my vote, +1
> >
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] End of Life Hadoop 2.9

2020-09-01 Thread Adam Antal
+1

Thanks,
Adam

On Tue, Sep 1, 2020 at 9:18 AM Stephen O'Donnell
 wrote:

> +1
>
> Thanks,
>
> Stephen.
>
>
> On Tue, Sep 1, 2020 at 7:25 AM Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp>
> wrote:
>
> > +1
> >
> > Thanks,
> > Masatake Iwasaki
> >
> > On 2020/09/01 4:09, Wei-Chiu Chuang wrote:
> > > Dear fellow Hadoop developers,
> > >
> > > Given the overwhelming feedback from the discussion thread
> > > https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
> > > thread for the community to vote and start the 2.9 EOL process.
> > >
> > > What this entails:
> > >
> > > (1) an official announcement that no further regular Hadoop 2.9.x
> > releases
> > > will be made after 2.9.2 (which was GA on 11/19/2019)
> > > (2) resolve JIRAs that specifically target 2.9.3 as won't fix.
> > >
> > >
> > > This vote will run for 7 days and will conclude by September 7th,
> 12:00pm
> > > pacific time.
> > > Committers are eligible to cast binding votes. Non-committers are
> > welcomed
> > > to cast non-binding votes.
> > >
> > > Here is my vote, +1
> > >
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
> >
>


Re: [VOTE] End of Life Hadoop 2.9

2020-09-01 Thread 孙立晟
+1
Thanks,
Lisheng Sun

Adam Antal wrote on Tue, Sep 1, 2020 at 4:24 PM:

> +1
>
> Thanks,
> Adam
>
> On Tue, Sep 1, 2020 at 9:18 AM Stephen O'Donnell
>  wrote:
>
> > +1
> >
> > Thanks,
> >
> > Stephen.
> >
> >
> > On Tue, Sep 1, 2020 at 7:25 AM Masatake Iwasaki <
> > iwasak...@oss.nttdata.co.jp>
> > wrote:
> >
> > > +1
> > >
> > > Thanks,
> > > Masatake Iwasaki
> > >
> > > On 2020/09/01 4:09, Wei-Chiu Chuang wrote:
> > > > Dear fellow Hadoop developers,
> > > >
> > > > Given the overwhelming feedback from the discussion thread
> > > > https://s.apache.org/hadoop2.9eold, I'd like to start an official
> vote
> > > > thread for the community to vote and start the 2.9 EOL process.
> > > >
> > > > What this entails:
> > > >
> > > > (1) an official announcement that no further regular Hadoop 2.9.x
> > > releases
> > > > will be made after 2.9.2 (which was GA on 11/19/2019)
> > > > (2) resolve JIRAs that specifically target 2.9.3 as won't fix.
> > > >
> > > >
> > > > This vote will run for 7 days and will conclude by September 7th,
> > 12:00pm
> > > > pacific time.
> > > > Committers are eligible to cast binding votes. Non-committers are
> > > welcomed
> > > > to cast non-binding votes.
> > > >
> > > > Here is my vote, +1
> > > >
> > >
> > > -
> > > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> > >
> > >
> >
>


[jira] [Created] (HDFS-15551) Tiny Improve for DeadNode detector

2020-09-01 Thread dark_num (Jira)
dark_num created HDFS-15551:
---

 Summary: Tiny Improve for DeadNode detector
 Key: HDFS-15551
 URL: https://issues.apache.org/jira/browse/HDFS-15551
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.3.0
Reporter: dark_num
 Fix For: 3.4.0


# Add or improve some logs for adding local & global dead nodes
 # Improve the logic
 # Fix typos



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-09-01 Thread Ayush Saxena
+1
* Built from source
* Verified checksums & signature
* Ran some basic shell commands.

Thanx Sammi for driving the release. Good Luck!!!

-Ayush

On Tue, 1 Sep 2020 at 13:20, Mukul Kumar Singh 
wrote:

> Thanks for preparing the RC Sammi.
>
> +1 (binding)
>
> 1. Verified Signatures
>
> 2. Compiled the source
>
> 3. Deployed a local Docker-based cluster and ran some basic commands.
>
> Thanks,
>
> Mukul
>
> On 01/09/20 12:07 pm, Rakesh Radhakrishnan wrote:
> > Thanks Sammi for getting this out!
> >
> > +1 (binding)
> >
> >   * Verified signatures.
> >   * Built from source.
> >   * Deployed small non-HA un-secure cluster.
> >   * Verified basic Ozone file system.
> >   * Tried out a few basic Ozone shell commands - create, list, delete
> >   * Ran a few Freon benchmark tests.
> >
> > Thanks,
> > Rakesh
> >
> > On Tue, Sep 1, 2020 at 11:53 AM Jitendra Pandey
> >  wrote:
> >
> >> +1 (binding)
> >>
> >> 1. Verified signatures
> >> 2. Built from source
> >> 3. deployed with docker
> >> 4. tested with basic s3 apis.
> >>
> >> On Tue, Aug 25, 2020 at 7:01 AM Sammi Chen 
> wrote:
> >>
> >>> RC1 artifacts are at:
> >>> https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> >>> 
> >>>
> >>> Maven artifacts are staged at:
> >>>
> https://repository.apache.org/content/repositories/orgapachehadoop-1278
> >>> <
> https://repository.apache.org/content/repositories/orgapachehadoop-1277
> >>>
> >>>
> >>> The public key used for signing the artifacts can be found at:
> >>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >>>
> >>> The RC1 tag in github is at:
> >>> https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> >>> 
> >>>
> >>> Change log of RC1, add
> >>> 1. HDDS-4063. Fix InstallSnapshot in OM HA
> >>> 2. HDDS-4139. Update version number in upgrade tests.
> >>> 3. HDDS-4144, Update version info in hadoop client dependency readme
> >>>
> >>> *The vote will run for 7 days, ending on Aug 31th 2020 at 11:59 pm
> PST.*
> >>>
> >>> Thanks,
> >>> Sammi Chen
> >>>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HDFS-15552) Let DeadNode Detector also work for EC cases

2020-09-01 Thread dark_num (Jira)
dark_num created HDFS-15552:
---

 Summary: Let DeadNode Detector also work for EC cases
 Key: HDFS-15552
 URL: https://issues.apache.org/jira/browse/HDFS-15552
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: dfsclient, ec
Affects Versions: 3.3.0
Reporter: dark_num
 Fix For: 3.4.0


Currently, exceptions in the EC stream (`DFSStripedInputStream`) are not handled 
properly.

For example, while reading EC blocks, if the client times out when connecting 
to a DataNode, a `SocketTimeoutException` is thrown and the current DataNode is 
added to localDeadNode.

However, the local dead nodes are not removed until the stream is closed, which 
can cause a *missing block IOException* to be thrown in use cases such as HBase.

So we need to use the DeadNode detector to handle dead nodes under EC and avoid 
read failures.
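
A minimal sketch of the idea, using hypothetical class names (SharedDeadNodeDetector, 
EcBlockReader) rather than the real HDFS client internals: instead of parking a 
timed-out DataNode in a per-stream local set until the stream closes, the reader 
reports it to a shared detector that can re-probe the node and clear it.

import java.net.SocketTimeoutException;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical shared detector: nodes reported dead by any stream are kept in
// one set and can be re-probed and cleared without waiting for a stream close.
class SharedDeadNodeDetector {
  private final Set<String> suspectedDead = ConcurrentHashMap.newKeySet();

  void reportDead(String datanode) {
    suspectedDead.add(datanode);
    // A background probe thread (omitted here) would ping the node and
    // call clear(datanode) once it responds again.
  }

  void clear(String datanode) {
    suspectedDead.remove(datanode);
  }

  boolean isDead(String datanode) {
    return suspectedDead.contains(datanode);
  }
}

// Hypothetical EC block reader: on a connect timeout it reports the node to
// the shared detector instead of a stream-local dead set.
class EcBlockReader {
  private final SharedDeadNodeDetector detector;

  EcBlockReader(SharedDeadNodeDetector detector) {
    this.detector = detector;
  }

  byte[] readBlockFrom(String datanode) {
    if (detector.isDead(datanode)) {
      throw new IllegalStateException("Skipping suspected dead node: " + datanode);
    }
    try {
      return connectAndRead(datanode);
    } catch (SocketTimeoutException e) {
      detector.reportDead(datanode);
      throw new RuntimeException("Read timed out on " + datanode, e);
    }
  }

  private byte[] connectAndRead(String datanode) throws SocketTimeoutException {
    // Placeholder for the actual striped read over the wire.
    return new byte[0];
  }
}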

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-09-01 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/

[Aug 31, 2020 4:52:29 AM] (noreply) YARN-10358. Fix findbugs warnings in 
hadoop-yarn-project on branch-2.10. (#2164)




-1 overall


The following subsystems voted -1:
asflicense hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.TestDecommission 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   jshint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/diff-patch-jshint.txt
  [208K]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/diff-compile-javac-root.txt
  [428K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [208K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [264K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/43/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server

Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-09-01 Thread Sammi Chen
+1

  - Verified the Ozone version of the binary package
  - Verified the Ozone source package content against the ozone-1.0.0-RC1 tag
  - Built Ozone from the source package
  - Deployed a new 1+3 cluster using the RC1 binary package
  - Checked the Ozone UI, SCM UI, Datanode UI and Recon UI
  - Ran TestDFSIO write/read with Hadoop 2.7.5
  - Verified basic o3fs operations, and file upload and download


Thanks,
Sammi

On Tue, Aug 25, 2020 at 10:01 PM Sammi Chen  wrote:

>
> RC1 artifacts are at:
> https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> 
>
> Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1278
> 
>
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The RC1 tag in github is at:
> https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> 
>
> Change log of RC1, add
> 1. HDDS-4063. Fix InstallSnapshot in OM HA
> 2. HDDS-4139. Update version number in upgrade tests.
> 3. HDDS-4144, Update version info in hadoop client dependency readme
>
> *The vote will run for 7 days, ending on Aug 31th 2020 at 11:59 pm PST.*
>
> Thanks,
> Sammi Chen
>


Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-09-01 Thread Sammi Chen
Hi All,

The voting has ended with:
12 binding +1 (including me)
3 non-binding +1
0 -1
0 0

With the above data, the Ozone 1.0.0 RC1 vote passes.
Thank you all for your verification and voting effort.

I will proceed with releasing the artifacts and announce the release after
that.


Bests,
Sammi

On Tue, Aug 25, 2020 at 10:01 PM Sammi Chen  wrote:

>
> RC1 artifacts are at:
> https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> 
>
> Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1278
> 
>
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The RC1 tag in github is at:
> https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> 
>
> Change log of RC1, add
> 1. HDDS-4063. Fix InstallSnapshot in OM HA
> 2. HDDS-4139. Update version number in upgrade tests.
> 3. HDDS-4144, Update version info in hadoop client dependency readme
>
> *The vote will run for 7 days, ending on Aug 31th 2020 at 11:59 pm PST.*
>
> Thanks,
> Sammi Chen
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-09-01 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/

[Aug 31, 2020 5:59:48 AM] (noreply) HDFS-15542. Add identified snapshot 
corruption tests for ordered snapshot deletion (#2251)
[Aug 31, 2020 6:49:12 AM] (Xiaoqiao He) HDFS-15550. Remove unused imports from 
TestFileTruncate.java. Contributed by Ravuri Sushma sree.
[Aug 31, 2020 2:00:39 PM] (Szilard Nemeth) [UI1] Provide a way to hide Tools 
section in Web UIv1. Contributed by Andras Gyori




-1 overall


The following subsystems voted -1:
asflicense pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.TestGetFileChecksum 
   hadoop.hdfs.TestDecommission 
   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier 
   hadoop.hdfs.server.federation.router.TestRouterRpc 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/diff-compile-cc-root.txt
  [48K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/diff-compile-javac-root.txt
  [568K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/whitespace-eol.txt
  [13M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/whitespace-tabs.txt
  [1.9M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/xml.txt
  [24K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/diff-javadoc-javadoc-root.txt
  [1.3M]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [548K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [448K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [16K]

   asflicense:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/252/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.12.0   https://yetus.apache.org

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h..

Unstable Unit Tests in Trunk

2020-09-01 Thread Eric Badger
While putting up patches for HADOOP-17169, I noticed that the
unit tests in trunk, specifically in HDFS, are incredibly unstable. Every
time I put up a new patch, 4-8 unit tests failed with failures that were
completely unrelated to the patch. I'm pretty confident in that since the
patch is simply changing variable names. I also ran the unit tests locally
and they would pass (or fail intermittently).

Is there an effort to stabilize the unit tests? I don't know if these are
bugs or if they're bad tests. But in either case, it's bad for the
stability of the project.

Eric


[jira] [Created] (HDFS-15554) RBF: force router check file existence before adding/updating mount points

2020-09-01 Thread Fengnan Li (Jira)
Fengnan Li created HDFS-15554:
-

 Summary: RBF: force router check file existence before 
adding/updating mount points
 Key: HDFS-15554
 URL: https://issues.apache.org/jira/browse/HDFS-15554
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Fengnan Li
Assignee: Fengnan Li


Adding or updating mount points is currently a router-only action, with no 
validation against the downstream NameNodes that the destination files/directories 
exist.

In practice we have ended up with dangling mount points: when clients call 
listStatus the entry is returned, but when they then try to access the file, a 
FileNotFoundException is thrown.
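
A minimal sketch of such a pre-check, assuming the router can reach the downstream 
namespace with an ordinary Hadoop FileSystem client; the checkDestinationExists 
helper and the example nameservice/path values are hypothetical, and only the 
FileSystem.get()/exists() calls mirror the real Hadoop API.

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MountPointPreCheck {

  // Hypothetical helper: verify the destination path exists in the downstream
  // namespace before the router records the mount point, so clients never see
  // a dangling entry from listStatus.
  static void checkDestinationExists(URI nameservice, String dest, Configuration conf)
      throws IOException {
    FileSystem fs = FileSystem.get(nameservice, conf);
    if (!fs.exists(new Path(dest))) {
      throw new IOException("Mount destination does not exist: " + nameservice + dest);
    }
  }

  public static void main(String[] args) throws IOException {
    // Illustrative values only; a real router would take these from the
    // dfsrouteradmin -add arguments.
    checkDestinationExists(URI.create("hdfs://ns1"), "/data/warehouse", new Configuration());
    System.out.println("Destination exists; safe to add the mount point.");
  }
}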



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[DISCUSS] Hadoop 3.2.2 release

2020-09-01 Thread Wei-Chiu Chuang
Hi folks,

I was reminded by Xiaoqiao that Hadoop 3.2.1 was made almost a year ago
(released on September 22, 2019) and we're overdue for a follow-up.

@Rohith Sharma K S   you were the RM for Hadoop
3.2.1. Are we planning to make the 3.2.2 release soon? Xiaoqiao wants to
help with the release.

Thanks,
Weichiu


[jira] [Created] (HDFS-15555) RBF: Refresh cacheNS when SocketException occurs

2020-09-01 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-15555:


 Summary: RBF: Refresh cacheNS when SocketException occurs
 Key: HDFS-15555
 URL: https://issues.apache.org/jira/browse/HDFS-15555
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: rbf
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka


Problem:
When the active NameNode is restarted and is loading its fsimage, DFSRouters slow 
down significantly.

Investigation:
While the active NameNode is restarted and loading its fsimage, RouterRpcClient 
receives SocketException. Since RouterRpcClient#isUnavailableException(IOException) 
returns false when the argument is a SocketException, the 
MembershipNameNodeResolver#cacheNS is not refreshed. As a result, the order of 
the NameNodes returned by 
MembershipNameNodeResolver#getNamenodesForNameserviceId(String) is unchanged, the 
active NameNode is still returned first, and RouterRpcClient keeps trying to 
connect to the NameNode that is loading its fsimage.

After loading the fsimage, the NameNode throws StandbyException. That exception 
is one of the 'unavailable exceptions', so the cacheNS is refreshed.

Workaround:
Stop the NameNode and wait one minute before starting it again, instead of 
restarting it directly.
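
A minimal sketch of the intended fix, with a hypothetical isUnavailableException 
method standing in for RouterRpcClient#isUnavailableException(IOException): 
treating SocketException like the other 'unavailable' exceptions would let the 
resolver refresh cacheNS as soon as connections to the restarting NameNode fail.

import java.io.IOException;
import java.net.ConnectException;
import java.net.SocketException;

public class UnavailableExceptionCheck {

  // Hypothetical stand-in for an exception the router already treats as
  // "NameNode unavailable" (the real one lives in the Hadoop IPC layer).
  static class StandbyException extends IOException {
    StandbyException(String msg) { super(msg); }
  }

  // Hypothetical version of the check: in addition to the exceptions already
  // classified as unavailable, SocketException (seen while the NameNode is
  // loading its fsimage) also triggers a cacheNS refresh so the standby
  // NameNode is tried first.
  static boolean isUnavailableException(IOException ioe) {
    return ioe instanceof StandbyException
        || ioe instanceof ConnectException
        || ioe instanceof SocketException;   // the proposed addition
  }

  public static void main(String[] args) {
    System.out.println(isUnavailableException(new SocketException("Connection reset"))); // true
    System.out.println(isUnavailableException(new IOException("unrelated failure")));    // false
  }
}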



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Hadoop & ApacheCon

2020-09-01 Thread Wei-Chiu Chuang
Hello,

This year's ApacheCon will take place online between 9/29 and 10/1. There
are lots of sessions made by our fellow Hadoop developers:

https://apachecon.com/acah2020/tracks/bigdata-1.html
https://apachecon.com/acah2020/tracks/bigdata-2.html

In case you didn't realize, the registration is free, so be sure to check
them out!

Some of the talks that are closely related to Hadoop:

Apache Hadoop YARN: Past, Now and Future
Szilard Nemeth, Sunil Govindan

Hadoop Storage Reloaded: the 5 lessons Ozone learned from HDFS
Márton Elek

GDPR’s Right to be Forgotten in Apache Hadoop Ozone
Dinesh Chitlangia

Global File System View Across all Hadoop Compatible File Systems with the
LightWeight Client Side Mount Points.
Uma Maheswara Rao Gangumalla

Apache Hadoop YARN fs2cs: Converting Fair Scheduler to Capacity Scheduler
Peter Bacsko

HDFS Migration from 2.7 to 3.3 and enabling Router Based Federation (RBF)
in production
Akira Ajisaka

Stepping towards Bigdata on ARM
Vinayakumar B, Liu Sheng

I am sure I missed out others since I only looked at the Big Data tracks.
Feel free to add more if you want to promote your talk :)

Cheers
Weichiu