Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests :

        hadoop.fs.TestFileUtil
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.hdfs.TestLeaseRecovery2
        hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
        hadoop.hdfs.TestDFSInotifyEventInputStream
        hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
        hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
        hadoop.fs.viewfs.TestViewFileSystemHdfs
        hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem
        hadoop.hdfs.server.federation.router.TestRouterQuota
        hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
        hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
        hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.mapreduce.lib.input.TestLineRecordReader
        hadoop.mapred.TestLineRecordReader
        hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
        hadoop.resourceestimator.service.TestResourceEstimatorService
        hadoop.resourceestimator.solver.impl.TestLpSolver
        hadoop.yarn.sls.TestSLSRunner
        hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
        hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
        hadoop.yarn.server.resourcemanager.TestClientRMService
        hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore
        hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker

   cc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/diff-compile-javac-root.txt [488K]

   checkstyle:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/diff-checkstyle-root.txt [14M]

   hadolint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/patch-mvnsite-root.txt [572K]

   pathlen:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/pathlen.txt [12K]

   pylint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/diff-patch-shellcheck.txt [72K]

   whitespace:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/whitespace-tabs.txt [1.3M]

   javadoc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/patch-javadoc-root.txt [36K]

   unit:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [220K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.8M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [24K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [16K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1260/artifact/out/patc
Re: [DISCUSS] Release Hadoop 3.4.0
> > I think the release discussion can be in public ML?

Good idea. cc common-dev/hdfs-dev/yarn-dev/mapreduce-dev ML.

Best Regards,
- He Xiaoqiao

On Tue, Jan 2, 2024 at 6:18 AM Ayush Saxena wrote:

> +1 from me as well.
>
> We should definitely attempt to upgrade the thirdparty version for
> 3.4.0 & check if there are any pending critical/blocker issues as
> well.
>
> I think the release discussion can be in public ML?
>
> -Ayush
>
> On Mon, 1 Jan 2024 at 18:25, Steve Loughran wrote:
> >
> > +1 from me
> >
> > ant and maven repo to build and validate things, including making arm
> > binaries if you work from an arm macbook.
> > https://github.com/steveloughran/validate-hadoop-client-artifacts
> >
> > do we need to publish an up to date thirdparty release for this?
> >
> > On Mon, 25 Dec 2023 at 16:06, slfan1989 wrote:
> >
> > > Dear PMC Members,
> > >
> > > First of all, Merry Christmas to everyone!
> > >
> > > In our community discussions, we collectively finalized the plan to release
> > > Hadoop 3.4.0 based on the current trunk branch. I am applying to take on
> > > the responsibility for the initial release of version 3.4.0, and the entire
> > > process is set to officially commence in January 2024.
> > > I have created a new JIRA: HADOOP-19018. Release 3.4.0.
> > >
> > > The specific work plan includes:
> > >
> > > 1. Following the guidance in the HowToRelease document, completing all the
> > >    relevant tasks required for the release of version 3.4.0.
> > > 2. Pointing the trunk branch to 3.5.0-SNAPSHOT.
> > > 3. Currently, the Fix Versions of all tasks merged into trunk are set as
> > >    3.4.0; I will move them to 3.5.0.
> > >
> > > Confirmed features to be included in the release:
> > >
> > > 1. Enhanced functionality for YARN Federation.
> > > 2. Optimization of HDFS RBF.
> > > 3. Introduction of fine-grained global locks for DataNodes.
> > > 4. Improvements in the stability of HDFS EC, and more.
> > > 5. Fixes for important CVEs.
> > >
> > > If you have any thoughts, suggestions, or concerns, please feel free to
> > > share them.
> > >
> > > Looking forward to a successful release!
> > >
> > > Best Regards,
> > > Shilun Fan.
[jira] [Created] (HDFS-17320) seekToNewSource uses ignoredNodes to get a new node other than the current node.
Jian Zhang created HDFS-17320:
---------------------------------

             Summary: seekToNewSource uses ignoredNodes to get a new node other than the current node
                 Key: HDFS-17320
                 URL: https://issues.apache.org/jira/browse/HDFS-17320
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Jian Zhang


--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
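The issue title suggests consulting the ignoredNodes set when picking a replacement source. A minimal sketch of that selection, purely illustrative (the class and method names here are hypothetical; the real logic lives in DFSInputStream):

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch: when seeking to a new source, return a replica that
// is neither the current node nor in the ignored set.
class SourceChooser {
    static String chooseNewSource(List<String> replicas, String current, Set<String> ignored) {
        for (String node : replicas) {
            // Skip the node we are already reading from and any node the
            // caller has asked us to ignore (e.g. known-slow or failed nodes).
            if (!node.equals(current) && !ignored.contains(node)) {
                return node;
            }
        }
        return null; // no alternative source available
    }
}
```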
[jira] [Resolved] (HDFS-17182) DataSetLockManager.lockLeakCheck() is not thread-safe.
[ https://issues.apache.org/jira/browse/HDFS-17182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoqiao He resolved HDFS-17182.
--------------------------------
    Fix Version/s: 3.4.0
     Hadoop Flags: Reviewed
       Resolution: Fixed

> DataSetLockManager.lockLeakCheck() is not thread-safe.
> ------------------------------------------------------
>
>                 Key: HDFS-17182
>                 URL: https://issues.apache.org/jira/browse/HDFS-17182
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: liuguanghua
>            Assignee: liuguanghua
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>
> threadCountMap is not thread-safe: every other method that touches it is
> protected by synchronized except lockLeakCheck(). Add synchronized to
> lockLeakCheck() as well.
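The race described above can be shown with a miniature of the same pattern. This is an illustrative sketch, not the actual DataSetLockManager code: a plain HashMap whose writers are all synchronized must not be iterated by an unsynchronized reader, or the iteration can interleave with a rehash and throw ConcurrentModificationException or observe a torn state.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical miniature of the HDFS-17182 pattern.
class LeakChecker {
    private final Map<String, Integer> threadCountMap = new HashMap<>();

    // Writers are synchronized, so they never interleave with each other.
    synchronized void hold(String thread) {
        threadCountMap.merge(thread, 1, Integer::sum);
    }

    synchronized void release(String thread) {
        // Decrement, removing the entry once the count reaches zero.
        threadCountMap.computeIfPresent(thread, (k, v) -> v > 1 ? v - 1 : null);
    }

    // The fix from the issue: make the checker synchronized too, so its
    // iteration cannot run concurrently with a writer.
    synchronized int leakCount() {
        int leaks = 0;
        for (int held : threadCountMap.values()) {
            leaks += held;
        }
        return leaks;
    }
}
```

The resolution in the issue is exactly this last step: adding synchronized to lockLeakCheck() so all accesses to the map share one monitor.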
[jira] [Created] (HDFS-17321) RBF: Add RouterAutoMsyncService for auto msync in Router
liuguanghua created HDFS-17321:
----------------------------------

             Summary: RBF: Add RouterAutoMsyncService for auto msync in Router
                 Key: HDFS-17321
                 URL: https://issues.apache.org/jira/browse/HDFS-17321
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: liuguanghua
[jira] [Resolved] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan
[ https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan resolved HDFS-17310.
-------------------------------
       Fix Version/s: 3.4.0
        Hadoop Flags: Reviewed
    Target Version/s: 3.4.0
          Resolution: Fixed

> DiskBalancer: Enhance the log message for submitPlan
> ----------------------------------------------------
>
>                 Key: HDFS-17310
>                 URL: https://issues.apache.org/jira/browse/HDFS-17310
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>
> To make troubleshooting more convenient, enhance the log message for
> submitPlan.
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/

[Jan 2, 2024, 11:17:37 PM] (github) YARN-11632. [Doc] Add allow-partial-result description to Yarn Federation documentation. (#6340) Contributed by Shilun Fan.

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen spotbugs xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs :

       module:hadoop-hdfs-project/hadoop-hdfs
       Dead store to sharedDirs in org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, boolean) At NameNode.java:[line 1383]

       module:hadoop-yarn-project/hadoop-yarn
       org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be At TimelineConnector.java:[line 82]

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
       org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be At TimelineConnector.java:[line 82]

       module:hadoop-hdfs-project
       Dead store to sharedDirs in org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, boolean) At NameNode.java:[line 1383]

       module:hadoop-yarn-project
       org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be At TimelineConnector.java:[line 82]

       module:root
       Dead store to sharedDirs in org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, boolean) At NameNode.java:[line 1383]
       org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be At TimelineConnector.java:[line 82]

   cc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/artifact/out/results-compile-cc-root.txt [96K]

   javac:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/artifact/out/results-compile-javac-root.txt [12K]

   blanks:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/artifact/out/blanks-eol.txt [15M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/artifact/out/blanks-tabs.txt [2.0M]

   checkstyle:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/artifact/out/results-checkstyle-root.txt [13M]

   hadolint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/artifact/out/results-hadolint.txt [24K]

   pathlen:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/artifact/out/results-pathlen.txt [16K]

   pylint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/artifact/out/results-pylint.txt [20K]

   shellcheck:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/artifact/out/results-shellcheck.txt [24K]

   xml:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/artifact/out/xml.txt [24K]

   javadoc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1460/artifact/out/results-javadoc-javadoc-root.txt [244K]

   spotbugs:

      https://ci-hado
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/608/

[Jan 1, 2024, 7:21:54 AM] (github) HADOOP-19020. Update the year to 2024. (#6397). Contributed by Ayush Saxena.
[Jan 1, 2024, 7:04:06 PM] (github) HADOOP-18540. Upgrade Bouncy Castle to 1.70 (#5166)
[Jan 1, 2024, 7:09:44 PM] (github) HADOOP-17912. ABFS: Support for Encryption Context (#6221)
[Jan 2, 2024, 11:17:37 PM] (github) YARN-11632. [Doc] Add allow-partial-result description to Yarn Federation documentation. (#6340) Contributed by Shilun Fan.

-1 overall

The following subsystems voted -1:
    blanks hadolint mvnsite pathlen spotbugs unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs :

       module:hadoop-hdfs-project/hadoop-hdfs
       Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) Redundant null check at DataStorage.java:[line 695]
       Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) Redundant null check at MappableBlockLoader.java:[line 138]
       Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at MemoryMappableBlockLoader.java:[line 75]
       Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at NativePmemMappableBlockLoader.java:[line 85]
       Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String) Redundant null check at NativePmemMappableBlockLoader.java:[line 130]
       Dead store to sharedDirs in org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, boolean) At NameNode.java:[line 1383]
       org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) At RollingWindowMana
Re: [DISCUSS] Release Hadoop 3.4.0
+1 from me. It will include the new AWS V2 SDK upgrade as well.

On Wed, Jan 3, 2024 at 6:35 AM Xiaoqiao He wrote:

> > > I think the release discussion can be in public ML?
>
> Good idea. cc common-dev/hdfs-dev/yarn-dev/mapreduce-dev ML.
>
> Best Regards,
> - He Xiaoqiao
>
> On Tue, Jan 2, 2024 at 6:18 AM Ayush Saxena wrote:
>
> > +1 from me as well.
> >
> > We should definitely attempt to upgrade the thirdparty version for
> > 3.4.0 & check if there are any pending critical/blocker issues as
> > well.
> >
> > I think the release discussion can be in public ML?
> >
> > -Ayush
> >
> > On Mon, 1 Jan 2024 at 18:25, Steve Loughran wrote:
> > >
> > > +1 from me
> > >
> > > ant and maven repo to build and validate things, including making arm
> > > binaries if you work from an arm macbook.
> > > https://github.com/steveloughran/validate-hadoop-client-artifacts
> > >
> > > do we need to publish an up to date thirdparty release for this?
> > >
> > > On Mon, 25 Dec 2023 at 16:06, slfan1989 wrote:
> > >
> > > > Dear PMC Members,
> > > >
> > > > First of all, Merry Christmas to everyone!
> > > >
> > > > In our community discussions, we collectively finalized the plan to release
> > > > Hadoop 3.4.0 based on the current trunk branch. I am applying to take on
> > > > the responsibility for the initial release of version 3.4.0, and the entire
> > > > process is set to officially commence in January 2024.
> > > > I have created a new JIRA: HADOOP-19018. Release 3.4.0.
> > > >
> > > > The specific work plan includes:
> > > >
> > > > 1. Following the guidance in the HowToRelease document, completing all the
> > > >    relevant tasks required for the release of version 3.4.0.
> > > > 2. Pointing the trunk branch to 3.5.0-SNAPSHOT.
> > > > 3. Currently, the Fix Versions of all tasks merged into trunk are set as
> > > >    3.4.0; I will move them to 3.5.0.
> > > >
> > > > Confirmed features to be included in the release:
> > > >
> > > > 1. Enhanced functionality for YARN Federation.
> > > > 2. Optimization of HDFS RBF.
> > > > 3. Introduction of fine-grained global locks for DataNodes.
> > > > 4. Improvements in the stability of HDFS EC, and more.
> > > > 5. Fixes for important CVEs.
> > > >
> > > > If you have any thoughts, suggestions, or concerns, please feel free to
> > > > share them.
> > > >
> > > > Looking forward to a successful release!
> > > >
> > > > Best Regards,
> > > > Shilun Fan.
[jira] [Created] (HDFS-17322) RetryCache#MAX_CAPACITY seems to be MIN_CAPACITY
farmmamba created HDFS-17322:
--------------------------------

             Summary: RetryCache#MAX_CAPACITY seems to be MIN_CAPACITY
                 Key: HDFS-17322
                 URL: https://issues.apache.org/jira/browse/HDFS-17322
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: ipc
    Affects Versions: 3.3.6
            Reporter: farmmamba
            Assignee: farmmamba


From the code logic, we can infer that RetryCache#MAX_CAPACITY would be better
named MIN_CAPACITY.
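The naming complaint can be illustrated with a small sketch of the same clamping pattern, assuming (as the report implies) that the constant is used as a lower bound on the computed capacity. The class and values below are hypothetical, not the actual RetryCache code:

```java
// Hypothetical sketch of the HDFS-17322 naming issue: a constant called
// MAX_CAPACITY that actually acts as a floor, not a ceiling.
class RetryCacheSketch {
    static final int MAX_CAPACITY = 16;

    // The computed capacity is clamped so it never drops BELOW the constant;
    // Math.max makes the constant a minimum, so MIN_CAPACITY would be the
    // accurate name.
    static int effectiveCapacity(int computed) {
        return Math.max(computed, MAX_CAPACITY);
    }
}
```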
[jira] [Resolved] (HDFS-17023) RBF: Record proxy time when call invokeConcurrent method.
[ https://issues.apache.org/jira/browse/HDFS-17023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan resolved HDFS-17023.
-------------------------------
    Fix Version/s: 3.4.0
     Hadoop Flags: Reviewed
       Resolution: Fixed

> RBF: Record proxy time when call invokeConcurrent method.
> ---------------------------------------------------------
>
>                 Key: HDFS-17023
>                 URL: https://issues.apache.org/jira/browse/HDFS-17023
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: rbf
>    Affects Versions: 3.3.4
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>
> Currently, we only invoke the code snippet below in the invokeConcurrent method:
> {code:java}
> if (rpcMonitor != null) {
>   rpcMonitor.proxyOp();
> } {code}
> Should we also invoke the proxyOpComplete method, as invokeMethod does?
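The asymmetry the issue points at is easy to see in miniature: a monitor that only records operation starts can never report proxy time. The sketch below is illustrative only; the method names mirror the snippet in the issue, but the class is hypothetical, not the real Router RPC monitor:

```java
// Hypothetical monitor illustrating why proxyOp() needs a matching
// proxyOpComplete(): without completions there is no elapsed time to record
// and the in-flight count never drains.
class ProxyMonitorSketch {
    private int started;
    private int completed;
    private long totalNanos;

    void proxyOp() {
        started++;
    }

    void proxyOpComplete(boolean success, long elapsedNanos) {
        completed++;
        totalNanos += elapsedNanos;
    }

    long averageProxyNanos() {
        return completed == 0 ? 0 : totalNanos / completed;
    }

    int inFlight() {
        return started - completed;
    }
}
```

Calling only proxyOp(), as the quoted invokeConcurrent snippet does, leaves inFlight() permanently growing and averageProxyNanos() at zero; pairing it with proxyOpComplete() is what makes proxy time observable.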
[jira] [Resolved] (HDFS-15807) RefreshVolume fails when replacing DISK/ARCHIVE vol on same mount
[ https://issues.apache.org/jira/browse/HDFS-15807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan resolved HDFS-15807.
-------------------------------
    Target Version/s:   (was: 3.4.0)
          Resolution: Done

> RefreshVolume fails when replacing DISK/ARCHIVE vol on same mount
> -----------------------------------------------------------------
>
>                 Key: HDFS-15807
>                 URL: https://issues.apache.org/jira/browse/HDFS-15807
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: Leon Gao
>            Assignee: Leon Gao
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When refreshing volumes to replace a DISK/ARCHIVE volume on the same mount,
> the operation fails because of a check for whether the same volume type
> already exists on the mount.
> We can resolve this by removing the old volumes first, then adding the new
> volumes, in the refreshVolume logic.