Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2021-10-08 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests:

   hadoop.fs.TestFileUtil
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
   hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
   hadoop.hdfs.server.datanode.TestDirectoryScanner
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
   hadoop.hdfs.server.federation.router.TestRouterQuota
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
   hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
   hadoop.yarn.server.resourcemanager.TestClientRMService
   hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
   hadoop.mapreduce.lib.input.TestLineRecordReader
   hadoop.mapred.TestLineRecordReader
   hadoop.tools.TestDistCpSystem
   hadoop.yarn.sls.TestSLSRunner
   hadoop.resourceestimator.service.TestResourceEstimatorService
   hadoop.resourceestimator.solver.impl.TestLpSolver

   cc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/diff-compile-javac-root.txt  [496K]

   checkstyle:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/diff-checkstyle-root.txt  [14M]

   hadolint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/diff-patch-hadolint.txt  [4.0K]

   mvnsite:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/patch-mvnsite-root.txt  [584K]

   pathlen:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/pathlen.txt  [12K]

   pylint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/diff-patch-pylint.txt  [48K]

   shellcheck:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/diff-patch-shellcheck.txt  [56K]

   shelldocs:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/diff-patch-shelldocs.txt  [48K]

   whitespace:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/whitespace-eol.txt  [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/whitespace-tabs.txt  [1.3M]

   javadoc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/patch-javadoc-root.txt  [32K]

   unit:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt  [232K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [428K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt  [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt  [40K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt  [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt  [128K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt  [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/444/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2

[jira] [Created] (HDFS-16264) When adding block keys, the records come from the specific Block Pool

2021-10-08 Thread JiangHua Zhu (Jira)
JiangHua Zhu created HDFS-16264:
---

 Summary: When adding block keys, the records come from the 
specific Block Pool
 Key: HDFS-16264
 URL: https://issues.apache.org/jira/browse/HDFS-16264
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: JiangHua Zhu


When block keys are added, a log entry is written, for example:
'
2021-10-08 20:21:48,844 INFO org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting block keys
2021-10-08 20:21:48,844 INFO org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting block keys
'
These messages are too terse: they do not say which Block Pool the keys belong to, so when multiple Block Pools exist the entries cannot be distinguished. The log message should include the Block Pool ID.
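
A minimal, hypothetical sketch of the proposed improvement follows. It is not the actual BlockTokenSecretManager code; the method name, parameter, and class name are invented for illustration only.

  // Hypothetical sketch only -- not the actual BlockTokenSecretManager code.
  // It shows the Block Pool ID being included in the "Setting block keys"
  // message so entries from different Block Pools can be told apart.
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class BlockKeyLogSketch {
    private static final Logger LOG =
        LoggerFactory.getLogger(BlockKeyLogSketch.class);

    // blockPoolId is an assumed parameter; the real secret manager would
    // already know which Block Pool it serves.
    void setBlockKeys(String blockPoolId) {
      // Before (ambiguous when several Block Pools exist):
      //   LOG.info("Setting block keys");
      // After (the Block Pool is explicit):
      LOG.info("Setting block keys for block pool {}", blockPoolId);
    }

    public static void main(String[] args) {
      new BlockKeyLogSketch().setBlockKeys("BP-1234567890-10.0.0.1-1570000000000");
    }
  }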



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Resolved] (HDFS-16263) Add CMakeLists for hdfs_allowSnapshot

2021-10-08 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-16263.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Add CMakeLists for hdfs_allowSnapshot
> -
>
> Key: HDFS-16263
> URL: https://issues.apache.org/jira/browse/HDFS-16263
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, hdfs_allowSnapshot is built from its [parent directory's 
> CMakeLists.txt|https://github.com/apache/hadoop/blob/95b537ee6a9ff3082c9ad9bc773f86fd4be04e50/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tools/CMakeLists.txt#L83-L89].
> This should be moved into a separate CMakeLists.txt file under 
> hdfs-allow-snapshot so that the build is more modular.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HDFS-16265) Refactor HDFS tool tests for better reuse

2021-10-08 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16265:
-

 Summary: Refactor HDFS tool tests for better reuse
 Key: HDFS-16265
 URL: https://issues.apache.org/jira/browse/HDFS-16265
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs++, tools
Affects Versions: 3.4.0
 Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Currently, the test cases written in hdfs-tool-test.h aren't easy to reuse, primarily because the expectations differ for each HDFS tool. I realized this while creating the PR for HDFS-16260. For instance, passing more than one argument is an error for hdfs_allowSnapshot, while it is the only valid scenario for hdfs_deleteSnapshot.

Thus, the test cases can't be reused without decoupling the expectations from the test case definitions. The solution is to move the expectations into the corresponding mock classes and have the test cases invoke the set-up call after the mock instances are created.
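
As a rough illustration of that pattern, here is a hypothetical Java-style sketch. The real harness is C++ and lives in hdfs-tool-test.h; all class, method, and path names below are invented. The point is only that each tool-specific mock owns its expectations, and the shared test body merely asks the mock to set them up after it has been created.

  // Hypothetical illustration only; the real tests are C++ mocks from
  // hdfs-tool-test.h. All names here are invented.
  import java.util.List;

  interface HdfsToolMock {
    // Each tool-specific mock defines what a valid invocation looks like,
    // so the shared test body no longer hard-codes per-tool expectations.
    void setExpectations(List<String> args);
    boolean run(List<String> args);
  }

  // allowSnapshot: exactly one path argument is the only valid call.
  class AllowSnapshotMock implements HdfsToolMock {
    private List<String> expected;

    @Override
    public void setExpectations(List<String> args) {
      if (args.size() != 1) {
        throw new AssertionError("allowSnapshot expects exactly one path");
      }
      expected = args;
    }

    @Override
    public boolean run(List<String> args) {
      return args.equals(expected);
    }
  }

  public class HdfsToolTestSketch {
    // Reusable test body: it receives an already-created mock and asks it to
    // set up its own expectations instead of embedding them in the test.
    static void runValidArgumentsTest(HdfsToolMock mock, List<String> args) {
      mock.setExpectations(args);
      if (!mock.run(args)) {
        throw new AssertionError("tool did not accept a valid invocation");
      }
    }

    public static void main(String[] args) {
      runValidArgumentsTest(new AllowSnapshotMock(), List.of("/some/path"));
      System.out.println("valid-arguments test passed");
    }
  }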



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2021-10-08 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/652/

[Oct 7, 2021 2:21:05 PM] (noreply) YARN-10692. Do not extend from CapacitySchedulerTestBase when not needed. Contributed by Tamas Domok
[Oct 7, 2021 3:09:38 PM] (noreply) YARN-10953. Make CapacityScheduler#getOrCreateQueueFromPlacementConte… Contributed by Andras Gyori
[Oct 7, 2021 5:57:11 PM] (noreply) HDFS-16251. Make hdfs_cat tool cross platform (#3523)
[Oct 7, 2021 6:11:42 PM] (noreply) YARN-10934. Fix LeafQueue#activateApplication NPE when the user of the pending application is missing from usersManager. Contributed by Benjamin Teke
[Oct 8, 2021 1:34:51 AM] (noreply) HADOOP-17955. Bump netty to the latest 4.1.68. (#3528)
