[jira] [Created] (HDFS-15593) Hadoop - Upgrade to JQuery 3.5.1

2020-09-23 Thread Aryan Gupta (Jira)
Aryan Gupta created HDFS-15593:
--

 Summary: Hadoop - Upgrade to JQuery 3.5.1
 Key: HDFS-15593
 URL: https://issues.apache.org/jira/browse/HDFS-15593
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Aryan Gupta
Assignee: Aryan Gupta


The jQuery dependency is being upgraded from jquery-3.4.1.min.js to jquery-3.5.1.min.js.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-09-23 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/

[Sep 22, 2020 2:48:18 AM] (Masatake Iwasaki) Publishing the bits for release 
2.10.1
[Sep 22, 2020 2:51:53 AM] (Masatake Iwasaki) Publishing the bits for release 
2.10.1 (addendum)
[Sep 22, 2020 6:57:36 PM] (noreply) MAPREDUCE-7294. Only application master 
should upload resource to Yarn Shared Cache. (#2319)




-1 overall


The following subsystems voted -1:
asflicense hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.client.api.impl.TestTimelineClientV2Impl 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   jshint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-patch-jshint.txt
  [208K]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-compile-javac-root.txt
  [456K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [216K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [280K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/65/artifact/out/patch-unit-hadoop-hdfs-proj

Skipping this week’s APAC Hadoop storage online meetup

2020-09-23 Thread Wei-Chiu Chuang
The Chinese Hadoop Meetup takes place this Saturday, so a call is not
planned this week.

If you are interested, feel free to sign up at
https://www.slidestalk.com/m/290 (the event is in Mandarin).

Thanks Xiaoqiao for organizing the next few calls & agenda.


Hadoop Storage Online Meetup in the Wiki

2020-09-23 Thread Wei-Chiu Chuang
Hi!

We've been running this call for over a year but I just realized we never
managed to publish the information in a searchable location. So here it is
in our wiki:
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Storage+Online+Meetup

By the way, does anyone know of a good solution for storing community
material? I'm looking for a service to store the recordings of these calls.
I don't think Apache offers this kind of service beyond the Apache web
host, nor an official Google Drive integration. Perhaps I could create a
Google Drive account that is shared among the Hadoop PMC members.

Thoughts?

Thanks,
Wei-Chiu


[ANNOUNCE] Lisheng Sun is a new Apache Hadoop Committer

2020-09-23 Thread Wei-Chiu Chuang
I am pleased to announce that Lisheng Sun has accepted the invitation to
become a Hadoop committer.

Lisheng has actively contributed to the project since July 2019. He
contributed two new features: the dead datanode detector (HDFS-13571) and a
new du implementation (HDFS-14313), along with lots of improvements,
including a number of short-circuit read optimizations (HDFS-15161) and
speedups of NN fsimage loading time (HDFS-13694 and HDFS-13693). Code-wise,
he resolved 57 Hadoop jiras.

Let's congratulate Lisheng for this new role!

Cheers,
Wei-Chiu Chuang (on behalf of the Apache Hadoop PMC)


[ANNOUNCE] Hui Fei is a new Apache Hadoop Committer

2020-09-23 Thread Wei-Chiu Chuang
I am pleased to announce that Hui Fei has accepted the invitation to become
a Hadoop committer.

He started contributing to the project in October 2016. Over the past 4
years he has contributed a lot in HDFS, especially in Erasure Coding,
Hadoop 3 upgrade, RBF and Standby Serving reads.

One of his biggest contributions is Hadoop 2->3 rolling upgrade support.
This was a major blocker for existing Hadoop users adopting Hadoop 3, and
adoption has gone up since. The community had long discussed Hadoop 3
rolling upgrade as a must-have, but no one took the initiative to make it
happen. I am personally very grateful for this.

The work on EC is impressive as well. He managed to onboard EC in
production at scale, fixing tricky problems along the way, and I am
grateful for that contribution too.

In addition to code contributions, he invested a lot in the community:

   - Apache Hadoop Community 2019 Beijing Meetup
     https://blogs.apache.org/hadoop/entry/hadoop-community-meetup-beijing-aug
     where he discussed the operational experience of RBF in production

   - Apache Hadoop Storage Community Sync Online
     https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit#heading=h.irqxw1iy16zo
     where he discussed the Hadoop 3 rolling upgrade support
Let's congratulate Hui for this new role!

Cheers,
Wei-Chiu Chuang (on behalf of the Apache Hadoop PMC)


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-09-23 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/

[Sep 22, 2020 3:53:04 PM] (Kihwal Lee) HDFS-15581. Access Controlled HttpFS 
Server. Contributed by Richard Ross.
[Sep 22, 2020 4:10:33 PM] (noreply) HADOOP-17277. Correct spelling errors for 
separator (#2322)
[Sep 22, 2020 4:22:04 PM] (noreply) HADOOP-17261. s3a rename() needs 
s3:deleteObjectVersion permission (#2303)
[Sep 22, 2020 8:23:20 PM] (noreply) HDFS-15557. Log the reason why a storage 
log file can't be deleted (#2274)




-1 overall


The following subsystems voted -1:
asflicense pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.hdfs.TestSnapshotCommands 
   hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-compile-cc-root.txt
  [48K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-compile-javac-root.txt
  [568K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/whitespace-eol.txt
  [13M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/whitespace-tabs.txt
  [1.9M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/xml.txt
  [24K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/diff-javadoc-javadoc-root.txt
  [1.3M]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [416K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [16K]

   asflicense:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/274/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.12.0   https://yetus.apache.org

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Created] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread Ye Ni (Jira)
Ye Ni created HDFS-15594:


 Summary: Lazy calculate live datanodes in safe mode tip
 Key: HDFS-15594
 URL: https://issues.apache.org/jira/browse/HDFS-15594
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Ye Ni


The safe mode tip is printed every 20 seconds.

This change defers calculating the number of live datanodes until the
reported-block threshold is met. It could further reduce safe mode time
from 1 hour to 45 minutes in MTPrime-CO4-3.

Old:
{code:java}
STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
live datanodes 2531 has reached the minimum number 1. Safe mode will be turned 
off automatically once the thresholds have been reached.{code}

New:
{code:java}
STATE* Safe mode ON. 
The reported blocks 134851250 needs additional 3218494 blocks to reach the 
threshold 0.9990 of total blocks 138207947.
The minimum number of live datanodes is not calculated since reported blocks 
hasn't reached the threshold. Safe mode will be turned off automatically once 
the thresholds have been reached.{code}
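
To illustrate the change, here is a minimal sketch of the lazy check. The
names (blockSafe, blockThreshold, countLiveDatanodes) are assumptions for
illustration, not the actual NameNode safe mode code: the live-datanode
count is only computed once the reported-block threshold has been met.

{code:java}
// Illustrative sketch only; the real logic lives in the NameNode safe mode
// implementation and uses different names.
class SafeModeTipSketch {
  long blockSafe;          // blocks reported so far
  long blockThreshold;     // blocks needed to leave safe mode
  int datanodeThreshold;   // minimum live datanodes required

  String getTip() {
    StringBuilder tip =
        new StringBuilder("STATE* Safe mode ON. The reported blocks ")
            .append(blockSafe).append(" needs additional ")
            .append(blockThreshold - blockSafe)
            .append(" blocks to reach the threshold.");
    if (blockSafe >= blockThreshold) {
      // Only now pay the cost of scanning the datanode list.
      tip.append(" The number of live datanodes ").append(countLiveDatanodes())
          .append(" has reached the minimum number ").append(datanodeThreshold)
          .append(".");
    } else {
      tip.append(" The minimum number of live datanodes is not calculated"
          + " since reported blocks hasn't reached the threshold.");
    }
    return tip.append(" Safe mode will be turned off automatically once the"
        + " thresholds have been reached.").toString();
  }

  int countLiveDatanodes() {
    return 0; // placeholder for the expensive scan over the datanode manager
  }
}
{code}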



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-23 Thread Mingliang Liu (Jira)
Mingliang Liu created HDFS-15595:


 Summary: TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
 Key: HDFS-15595
 URL: https://issues.apache.org/jira/browse/HDFS-15595
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, snapshots, test
Reporter: Mingliang Liu


See 
[this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
 for a sample error.

Sample error stack:
{quote}
Error Message
The real output is: createSnapshot: Failed to create snapshot: there are 
already 4 snapshot(s) and the per directory snapshot limit is 3
.
 It should contain: Failed to add snapshot: there are already 3 snapshot(s) and 
the max snapshot limit is 3
Stacktrace
java.lang.AssertionError: 
The real output is: createSnapshot: Failed to create snapshot: there are 
already 4 snapshot(s) and the per directory snapshot limit is 3
.
 It should contain: Failed to add snapshot: there are already 3 snapshot(s) and 
the max snapshot limit is 3
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
at 
org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{quote}

I can also reproduce this locally.
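
For reference, the failing check boils down to a substring assertion on the
FsShell output. Below is a minimal sketch of that pattern (illustrative
only; the real helper is DFSTestUtil.toolRun, whose message format is
quoted above):

{code:java}
import org.junit.Assert;

// Illustrative sketch of the containment assertion behind the failure
// above; the actual helper is DFSTestUtil.toolRun in the Hadoop test tree.
public class OutputContainsSketch {
  public static void assertContains(String output, String expected) {
    Assert.assertTrue(
        "The real output is: " + output + ".\n It should contain: " + expected,
        output.contains(expected));
  }
}
{code}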



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] Hui Fei is a new Apache Hadoop Committer

2020-09-23 Thread Xun Liu
Hui Fei, Congratulations!



Re: [ANNOUNCE] Lisheng Sun is a new Apache Hadoop Committer

2020-09-23 Thread Guanghao Zhang
Congratulations, Lisheng!


Re: [ANNOUNCE] Hui Fei is a new Apache Hadoop Committer

2020-09-23 Thread Sammi Chen
Congratulations to Hui!



Re: [DISCUSS] Ozone TLP proposal

2020-09-23 Thread Hui Fei
Hi Elek,

> 2. Following the path of Submarine, any existing Hadoop committers --
who are willing to contribute -- can ask to be included in the initial
committer list without any additional constraints. (Edit the wiki, or
send an email to this thread or to me). Thanks to Vinod for suggesting
this approach (for Submarine at that time).

Since I'm doing some work on Ozone in the near future and am willing to
contribute, please add my name to the wiki.

Thanks
Fei Hui

Elek, Marton  于2020年9月7日周一 下午8:04写道:

>
> Hi,
>
> The Hadoop community earlier decided to move the Ozone sub-project out to
> a separate Apache Top Level Project (TLP). [1]
>
> For detailed history and motivation, please check the previous thread ([1])
>
> Ozone community discussed and agreed on the initial version of the
> project proposal, and now it's time to discuss it with the full Hadoop
> community.
>
> The current version is available at the Hadoop wiki:
>
>
> https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Hadoop+subproject+to+Apache+TLP+proposal
>
>
>   1. Please read it. You can suggest any modifications or topics to
> cover (here or in the comments)
>
>   2. Following the path of Submarine, any existing Hadoop committers --
> who are willing to contribute -- can ask to be included in the initial
> committer list without any additional constraints. (Edit the wiki, or
> send an email to this thread or to me). Thanks to Vinod for suggesting
> this approach (for Submarine at that time).
>
>
> Next steps:
>
>   * After this discussion thread (in case of consensus) a new VOTE
> thread will be started about the proposal (*-dev@hadoop.a.o)
>
>   * In case VOTE is passed, the proposal will be sent to the Apache
> Board to be discussed.
>
>
> Please help to make the proposal better,
>
> Thanks a lot,
> Marton
>
>
> [1].
>
> https://lists.apache.org/thread.html/r298eba8abecc210abd952f040b0c4f07eccc62dcdc49429c1b8f4ba9%40%3Chdfs-dev.hadoop.apache.org%3E
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-23 Thread Uma Maheswara Rao G (Jira)
Uma Maheswara Rao G created HDFS-15596:
--

 Summary: ViewHDFS#create(f, permission, cflags, bufferSize, 
replication, blockSize, progress, checksumOpt) should not be restricted to DFS 
only.
 Key: HDFS-15596
 URL: https://issues.apache.org/jira/browse/HDFS-15596
 Project: Hadoop HDFS
  Issue Type: Sub-task
 Environment: The ViewHDFS#create(f, permission, cflags, bufferSize, 
replication, blockSize, progress, checksumOpt) API is already available in 
FileSystem. It goes through another overloaded API and can ultimately reach 
ViewFileSystem, so this case works in regular ViewFileSystem as well. With 
ViewHDFS, we restricted this call to DFS only, which causes distcp to fail 
when the target is non-HDFS, since distcp uses this API (see the sketch 
below).
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
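
A method-level sketch of the intended delegation, assuming a hypothetical
vfs field holding the underlying ViewFileSystem (illustrative only, not the
actual ViewDistributedFileSystem code):

{code:java}
// Illustrative sketch; the vfs field and the fallback logic are assumptions.
@Override
public FSDataOutputStream create(Path f, FsPermission permission,
    EnumSet<CreateFlag> cflags, int bufferSize, short replication,
    long blockSize, Progressable progress, Options.ChecksumOpt checksumOpt)
    throws IOException {
  if (this.vfs == null) {
    // No mount table in effect: behave like plain DistributedFileSystem.
    return super.create(f, permission, cflags, bufferSize, replication,
        blockSize, progress, checksumOpt);
  }
  // Delegate through ViewFileSystem so the resolved target need not be DFS,
  // which lets callers such as distcp write to non-HDFS targets.
  return this.vfs.create(f, permission, cflags, bufferSize, replication,
      blockSize, progress, checksumOpt);
}
{code}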






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] Lisheng Sun is a new Apache Hadoop Committer

2020-09-23 Thread Xiaoqiao He
Congrats!

Best Regards,
He Xiaoqiao



[jira] [Created] (HDFS-15597) ContentSummary.getSpaceConsumed does not consider replication

2020-09-23 Thread Ajmal Ahammed (Jira)
Ajmal Ahammed created HDFS-15597:


 Summary: ContentSummary.getSpaceConsumed does not consider 
replication
 Key: HDFS-15597
 URL: https://issues.apache.org/jira/browse/HDFS-15597
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfs
Affects Versions: 2.6.0
Reporter: Ajmal Ahammed


I am trying to get the disk space consumed by an HDFS directory using the 
{{ContentSummary.getSpaceConsumed}} method, but I can't get the space 
consumption to reflect the replication factor. The replication factor is 2, 
so I was expecting twice the actual file size from the above method.

{code}
ubuntu@ubuntu:~/ht$ sudo -u hdfs hdfs dfs -ls /var/lib/ubuntu
Found 2 items
-rw-r--r--   2 ubuntu ubuntu3145728 2020-09-08 09:55 
/var/lib/ubuntu/size-test
drwxrwxr-x   - ubuntu ubuntu  0 2020-09-07 06:37 /var/lib/ubuntu/test
{code}

But when I run the following code,
{code:java}
// fileStatus comes from an earlier getFileStatus() call (not shown).
Configuration conf = new Configuration();
String path = "/etc/hadoop/conf/";
conf.addResource(new Path(path + "core-site.xml"));
conf.addResource(new Path(path + "hdfs-site.xml"));
// Util.getContentSummary() takes a Path, so pass the status's path here.
long size = FileContext.getFileContext(conf).util()
    .getContentSummary(fileStatus.getPath()).getSpaceConsumed();
System.out.println("Replication : " + fileStatus.getReplication());
System.out.println("File size : " + size);
{code}

The output is

{code}
Replication : 0
File size : 3145728
{code}
Both the file size and the replication factor seem to be incorrect.


/etc/hadoop/conf/hdfs-site.xml contains the following config:

{code}
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
{code}
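
For reference, a small sanity check of the expected numbers (a sketch
assuming the file was written with replication 2, as the -ls output above
shows). Note that FileStatus.getReplication() returns 0 for directories, so
the 0 above is expected if fileStatus refers to the directory rather than
the file:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class SpaceConsumedCheck {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    // Query the file itself, not its parent directory.
    ContentSummary cs = fc.util().getContentSummary(
        new Path("/var/lib/ubuntu/size-test"));
    System.out.println("Length         : " + cs.getLength());        // 3145728
    // With dfs.replication=2 honored, spaceConsumed should be 2x the length.
    System.out.println("Space consumed : " + cs.getSpaceConsumed()); // 6291456
  }
}
{code}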



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] Hui Fei is a new Apache Hadoop Committer

2020-09-23 Thread runlin zhang
Congratulations!

Boss Fei!



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] Lisheng Sun is a new Apache Hadoop Committer

2020-09-23 Thread runlin zhang
Congratulations, Lisheng!



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] Hui Fei is a new Apache Hadoop Committer

2020-09-23 Thread Xiaoqiao He
Congrats!

Best Regards,
He Xiaoqiao

>