Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests :

       hadoop.fs.TestFileUtil
       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
       hadoop.hdfs.server.federation.router.TestRouterQuota
       hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
       hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
       hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
       hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
       hadoop.yarn.server.resourcemanager.TestClientRMService
       hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
       hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
       hadoop.mapreduce.lib.input.TestLineRecordReader
       hadoop.mapred.TestLineRecordReader
       hadoop.tools.TestDistCpSystem
       hadoop.yarn.sls.TestSLSRunner
       hadoop.resourceestimator.solver.impl.TestLpSolver
       hadoop.resourceestimator.service.TestResourceEstimatorService

   cc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/diff-compile-javac-root.txt  [488K]

   checkstyle:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/diff-checkstyle-root.txt  [14M]

   hadolint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/diff-patch-hadolint.txt  [4.0K]

   mvnsite:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/patch-mvnsite-root.txt  [572K]

   pathlen:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/pathlen.txt  [12K]

   pylint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/diff-patch-pylint.txt  [20K]

   shellcheck:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/diff-patch-shellcheck.txt  [72K]

   whitespace:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/whitespace-eol.txt  [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/whitespace-tabs.txt  [1.3M]

   javadoc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/patch-javadoc-root.txt  [40K]

   unit:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt  [220K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [428K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt  [16K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt  [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt  [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [72K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt  [116K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt  [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/834/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt  [24
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/

[Nov 2, 2022, 7:41:27 AM] (noreply) HADOOP-18484. Upgrade hsqldb to v2.7.1 to mitigate CVE-2022-41853 (#4991)

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    Failed junit tests :

       hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver

   cc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/results-compile-cc-root.txt  [96K]

   javac:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/results-compile-javac-root.txt  [528K]

   blanks:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/blanks-eol.txt  [14M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/blanks-tabs.txt  [2.0M]

   checkstyle:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/results-checkstyle-root.txt  [13M]

   hadolint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/results-hadolint.txt  [8.0K]

   pathlen:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/results-pathlen.txt  [16K]

   pylint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/results-pylint.txt  [20K]

   shellcheck:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/results-shellcheck.txt  [24K]

   xml:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/xml.txt  [24K]

   javadoc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/results-javadoc-javadoc-root.txt  [392K]

   unit:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1033/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt  [120K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org
[jira] [Created] (HDFS-16833) NameNode should log internal EC blocks instead of the EC block group when it receives block reports
Takanobu Asanuma created HDFS-16833:
---------------------------------------

             Summary: NameNode should log internal EC blocks instead of the EC block group when it receives block reports
                 Key: HDFS-16833
                 URL: https://issues.apache.org/jira/browse/HDFS-16833
             Project: Hadoop HDFS
          Issue Type: Task
            Reporter: Takanobu Asanuma
            Assignee: Takanobu Asanuma

When creating an EC file, the NameNode only logs the EC block group for each of the internal EC blocks.

{noformat}
// replica file
2022-11-04 10:38:20,124 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11007 is added to blk_1073741825_1001 (size=1024)
2022-11-04 10:38:20,126 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11004 is added to blk_1073741825_1001 (size=1024)
2022-11-04 10:38:20,126 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11001 is added to blk_1073741825_1001 (size=1024)

// ec file
2022-11-04 10:39:02,376 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11008 is added to blk_-9223372036854775792_1002 (size=0)
2022-11-04 10:39:02,381 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11000 is added to blk_-9223372036854775792_1002 (size=0)
2022-11-04 10:39:02,383 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11001 is added to blk_-9223372036854775792_1002 (size=0)
2022-11-04 10:39:02,385 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11007 is added to blk_-9223372036854775792_1002 (size=0)
2022-11-04 10:39:02,387 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11009 is added to blk_-9223372036854775792_1002 (size=0)
2022-11-04 10:39:02,389 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11004 is added to blk_-9223372036854775792_1002 (size=0)
2022-11-04 10:39:02,390 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11006 is added to blk_-9223372036854775792_1002 (size=0)
2022-11-04 10:39:02,393 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11002 is added to blk_-9223372036854775792_1002 (size=0)
2022-11-04 10:39:02,395 [Block report processor] INFO BlockStateChange (BlockManager.java:addStoredBlock(3633)) - BLOCK* addStoredBlock: 127.0.0.1:11003 is added to blk_-9223372036854775792_1002 (size=0)
{noformat}
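As background, here is a minimal, self-contained sketch (illustrative only, not the actual BlockManager code; class and method names are hypothetical) of the striped-block ID convention this request relies on: a striped block group ID has its low bits zeroed, and the ID of internal block i within the group is the group ID plus i, so logging only the group ID (blk_-9223372036854775792 above) does not identify which internal block a given DataNode reported.

{code:java}
// Illustrative only -- hypothetical helper, not taken from BlockManager.
// HDFS striped (EC) block group ids have their low bits zeroed, so the id of
// internal block i in the group is simply (groupId + i); printing that id
// would distinguish the internal blocks in the BlockStateChange log.
public class EcBlockIdExample {

  static long internalBlockId(long blockGroupId, int indexInGroup) {
    return blockGroupId + indexInGroup;
  }

  public static void main(String[] args) {
    // Block group id from the log excerpt above.
    long groupId = -9223372036854775792L;
    // e.g. RS-6-3: 6 data + 3 parity internal blocks per group.
    for (int i = 0; i < 9; i++) {
      System.out.printf("internal block %d -> blk_%d%n", i, internalBlockId(groupId, i));
    }
  }
}
{code}

Logging the derived internal block ID alongside the DataNode address, rather than the bare group ID, is the kind of output this issue asks for.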
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/391/

[Nov 1, 2022, 6:41:41 PM] (noreply) HADOOP-18512: upgrade woodstox-core to 5.4.0 for security fix (#5087). Contributed by PJ Fanning.
[Nov 1, 2022, 8:34:59 PM] (noreply) document fix for MAPREDUCE-7425 (#5090)
[Nov 1, 2022, 9:02:06 PM] (noreply) YARN-11363. Remove unused TimelineVersionWatcher and TimelineVersion from hadoop-yarn-server-tests (#5091)
[Nov 1, 2022, 9:44:35 PM] (noreply) YARN-11364. Docker Container to accept docker Image name with sha256 digest (#5092)
[Nov 2, 2022, 7:41:27 AM] (noreply) HADOOP-18484. Upgrade hsqldb to v2.7.1 to mitigate CVE-2022-41853 (#4991)

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen spotbugs unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs :

       module:hadoop-hdfs-project/hadoop-hdfs
       Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)) Redundant null check at DataStorage.java:[line 695]
       Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) Redundant null check at MappableBlockLoader.java:[line 138]
       Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at MemoryMappableBlockLoader.java:[line 75]
       Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at NativePmemMappableBlockLoader.java:[line 85]
       Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,, long, FileInputStream, FileChannel, String) Redundant null check at NativePmemMappableBlockLoader.java:[line 130]
       org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) At RollingWindowManager.java:[line 1]

    spotbugs :

       module:hadoop-yarn-project/hadoop-yarn
       Redundant nullcheck of it, which is known t
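For readers unfamiliar with the spotbugs entries above, here is a small hedged illustration (not taken from the Hadoop sources; class and method names are hypothetical) of the pattern a "Redundant nullcheck of X, which is known to be non-null" warning refers to: a reference that cannot be null at that point is still compared against null.

{code:java}
// Illustrative only -- not Hadoop code. FileInputStream.getChannel() never
// returns null, so the null check below is redundant, which is exactly the
// kind of finding spotbugs reports in the list above.
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public class RedundantNullCheckExample {
  static long channelSize(FileInputStream in) throws IOException {
    FileChannel channel = in.getChannel();   // never null
    try {
      if (channel != null) {                 // redundant: known to be non-null here
        return channel.size();
      }
      return -1L;
    } finally {
      channel.close();
    }
  }
}
{code}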
[jira] [Resolved] (HDFS-16810) Support to make dfs.namenode.decommission.backoff.monitor.pending.blocks.per.lock reconfigurable
     [ https://issues.apache.org/jira/browse/HDFS-16810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haiyang Hu resolved HDFS-16810.
-------------------------------
    Resolution: Done

> Support to make dfs.namenode.decommission.backoff.monitor.pending.blocks.per.lock reconfigurable
> --------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16810
>                 URL: https://issues.apache.org/jira/browse/HDFS-16810
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>
> When the Backoff monitor is enabled, the parameter
> dfs.namenode.decommission.backoff.monitor.pending.blocks.per.lock can be
> adjusted dynamically; it determines how many blocks are processed before the
> NameNode write lock is released when moving blocks to pendingReplication.
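As a hedged sketch of what "reconfigurable" would mean here (illustrative only, not the actual NameNode implementation; the class, method names, and default value below are hypothetical): the decommission backoff monitor would read the per-lock block limit from a field that a reconfiguration hook can update at runtime, rather than from a value fixed when the NameNode starts.

{code:java}
// Illustrative sketch only -- hypothetical names, not Hadoop source.
import java.util.concurrent.atomic.AtomicInteger;

public class BackoffMonitorReconfigExample {
  // Hypothetical default; the real default is defined by the HDFS configuration.
  private static final int DEFAULT_BLOCKS_PER_LOCK = 1000;

  // Read by the decommission backoff monitor between write-lock acquisitions.
  private final AtomicInteger pendingBlocksPerLock =
      new AtomicInteger(DEFAULT_BLOCKS_PER_LOCK);

  /** Reconfiguration hook: apply a new value for ...pending.blocks.per.lock at runtime. */
  public void reconfigureBlocksPerLock(String newValue) {
    int parsed = (newValue == null) ? DEFAULT_BLOCKS_PER_LOCK : Integer.parseInt(newValue);
    if (parsed <= 0) {
      throw new IllegalArgumentException("pending.blocks.per.lock must be positive: " + parsed);
    }
    pendingBlocksPerLock.set(parsed);
  }

  /** The monitor processes this many blocks, then releases and re-acquires the write lock. */
  public int getBlocksPerLock() {
    return pendingBlocksPerLock.get();
  }
}
{code}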
[jira] [Reopened] (HDFS-16810) Support to make dfs.namenode.decommission.backoff.monitor.pending.blocks.per.lock reconfigurable
     [ https://issues.apache.org/jira/browse/HDFS-16810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haiyang Hu reopened HDFS-16810:
-------------------------------

> Support to make dfs.namenode.decommission.backoff.monitor.pending.blocks.per.lock reconfigurable
> --------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16810
>                 URL: https://issues.apache.org/jira/browse/HDFS-16810
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>
> When the Backoff monitor is enabled, the parameter
> dfs.namenode.decommission.backoff.monitor.pending.blocks.per.lock can be
> adjusted dynamically; it determines how many blocks are processed before the
> NameNode write lock is released when moving blocks to pendingReplication.
[jira] [Resolved] (HDFS-16810) Support to make dfs.namenode.decommission.backoff.monitor.pending.blocks.per.lock reconfigurable
     [ https://issues.apache.org/jira/browse/HDFS-16810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haiyang Hu resolved HDFS-16810.
-------------------------------
    Resolution: Won't Do

> Support to make dfs.namenode.decommission.backoff.monitor.pending.blocks.per.lock reconfigurable
> --------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16810
>                 URL: https://issues.apache.org/jira/browse/HDFS-16810
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>
> When the Backoff monitor is enabled, the parameter
> dfs.namenode.decommission.backoff.monitor.pending.blocks.per.lock can be
> adjusted dynamically; it determines how many blocks are processed before the
> NameNode write lock is released when moving blocks to pendingReplication.