Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:

       hadoop.fs.TestFileUtil
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.hdfs.TestLeaseRecovery2
       hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
       hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
       hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
       hadoop.hdfs.TestFileLengthOnClusterRestart
       hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
       hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.fs.viewfs.TestViewFileSystemHdfs
       hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
       hadoop.hdfs.server.federation.router.TestRouterQuota
       hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
       hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
       hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.mapreduce.lib.input.TestLineRecordReader
       hadoop.mapred.TestLineRecordReader
       hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
       hadoop.resourceestimator.service.TestResourceEstimatorService
       hadoop.resourceestimator.solver.impl.TestLpSolver
       hadoop.yarn.sls.TestSLSRunner
       hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
       hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
       hadoop.yarn.server.resourcemanager.TestClientRMService
       hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-compile-javac-root.txt [488K]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-checkstyle-root.txt [14M]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-mvnsite-root.txt [572K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/pathlen.txt [12K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-patch-shellcheck.txt [72K]

   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/whitespace-tabs.txt [1.3M]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-javadoc-root.txt [36K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [220K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.8M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [16K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.t
[jira] [Created] (HDFS-17337) RPC RESPONSE time seems not exactly accurate when using FSEditLogAsync.
farmmamba created HDFS-17337:
---------------------------------

             Summary: RPC RESPONSE time seems not exactly accurate when using FSEditLogAsync.
                 Key: HDFS-17337
                 URL: https://issues.apache.org/jira/browse/HDFS-17337
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
    Affects Versions: 3.3.6
            Reporter: farmmamba
            Assignee: farmmamba


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
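For context on the timing concern in this issue: with asynchronous edit logging, an RPC handler can hand the edit off to a background sync thread and return, while the response to the client is only sent once the sync completes. A latency figure captured when the handler returns therefore need not match what the client actually waits for. The sketch below is a hypothetical, self-contained illustration of that measurement gap; the names (AsyncResponseTiming, asyncLogSync) are invented for the example and are not Hadoop APIs.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical illustration: when a sync is asynchronous, the time measured
    // at handler return ("processing time") is shorter than the time at which
    // the deferred response is actually sent back to the caller.
    public class AsyncResponseTiming {
      private static final ScheduledExecutorService SYNC_THREAD =
          Executors.newSingleThreadScheduledExecutor();

      // Stand-in for an asynchronous edit-log sync that completes ~5 ms later.
      static CompletableFuture<Void> asyncLogSync() {
        CompletableFuture<Void> done = new CompletableFuture<>();
        SYNC_THREAD.schedule(() -> done.complete(null), 5, TimeUnit.MILLISECONDS);
        return done;
      }

      public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        CompletableFuture<Void> sync = asyncLogSync(); // edit queued, handler returns
        long handlerReturn = System.nanoTime();        // "processing time" stops here

        sync.thenRun(() -> {
          long responseSent = System.nanoTime();       // deferred response goes out here
          System.out.printf("handler time: %.2f ms, client-visible time: %.2f ms%n",
              (handlerReturn - start) / 1e6, (responseSent - start) / 1e6);
        }).get();
        SYNC_THREAD.shutdown();
      }
    }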
[jira] [Created] (HDFS-17338) Improve the doc related to permissions.
huangzhaobo created HDFS-17338:
----------------------------------

             Summary: Improve the doc related to permissions.
                 Key: HDFS-17338
                 URL: https://issues.apache.org/jira/browse/HDFS-17338
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: huangzhaobo


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Resolved] (HDFS-17312) packetsReceived metric should ignore heartbeat packet
     [ https://issues.apache.org/jira/browse/HDFS-17312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma resolved HDFS-17312.
-------------------------------------
    Fix Version/s: 3.5.0
       Resolution: Fixed

> packetsReceived metric should ignore heartbeat packet
> -----------------------------------------------------
>
>                 Key: HDFS-17312
>                 URL: https://issues.apache.org/jira/browse/HDFS-17312
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 3.3.6
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.5.0
>
>
> The packetsReceived metric should ignore heartbeat packets and count only
> data packets and the last packet in a block.


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
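The change described above boils down to a guard on the counter: heartbeat packets only keep the write pipeline alive and carry no data, so they should not increment packetsReceived. Below is a minimal sketch of that idea, using simplified stand-in classes rather than the actual DataNode BlockReceiver code.

    // Simplified stand-ins for the DataNode's packet header and metrics.
    class PacketHeader {
      private final int dataLen;
      private final boolean lastPacketInBlock;

      PacketHeader(int dataLen, boolean lastPacketInBlock) {
        this.dataLen = dataLen;
        this.lastPacketInBlock = lastPacketInBlock;
      }

      // A heartbeat carries no data and does not terminate the block.
      boolean isHeartbeat() { return dataLen == 0 && !lastPacketInBlock; }
    }

    class BlockReceiverMetrics {
      private long packetsReceived;

      void onPacket(PacketHeader header) {
        // Count only data packets and the last (empty) packet of a block;
        // periodic heartbeat packets are ignored.
        if (!header.isHeartbeat()) {
          packetsReceived++;
        }
      }

      long getPacketsReceived() { return packetsReceived; }
    }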
Re: [VOTE] Release Apache Hadoop 3.4.0 RC0
wonderful! I'll be testing over the weekend.

Meanwhile, new changes I'm putting in to trunk are tagged as fixed in 3.5.0
-correct?

steve

On Thu, 11 Jan 2024 at 05:15, slfan1989 wrote:

> Hello all,
>
> We plan to release Hadoop 3.4.0 based on Hadoop trunk; this is the first
> 3.4.0 RC.
>
> The RC is available at:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/ (for amd64)
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-arm64/ (for arm64)
>
> Maven artifacts are built on an x86 machine and are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1391/
>
> My public key:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Changelog:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/CHANGELOG.md
>
> Release notes:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/RELEASENOTES.md
>
> This is a relatively big release (by Hadoop standards), containing about
> 2852 commits.
>
> Please give it a try; this RC vote will run for 7 days.
>
> Feature highlights:
>
> DataNode FsDatasetImpl Fine-Grained Locking via BlockPool
>
> [HDFS-15180](https://issues.apache.org/jira/browse/HDFS-15180) splits the
> FsDatasetImpl datasetLock by block pool, relieving contention on the single
> heavyweight datasetLock when a large cluster serves many namespaces.
>
> YARN Federation improvements
>
> [YARN-5597](https://issues.apache.org/jira/browse/YARN-5597) brings many
> improvements, including the following:
>
> 1. YARN Router now provides a full implementation of all relevant
>    interfaces, including ApplicationClientProtocol,
>    ResourceManagerAdministrationProtocol, and RMWebServiceProtocol.
> 2. The YARN Router now supports application cleanup and automatic
>    offlining of SubClusters.
> 3. The Router and AMRMProxy code was optimized, and previously pending
>    functionality was completed.
> 4. Audit logs and metrics for the Router were upgraded.
> 5. Cluster security was strengthened with the addition of Kerberos support.
> 6. The Router's web pages have been enhanced.
>
> Upgrade AWS SDK to V2
>
> [HADOOP-18073](https://issues.apache.org/jira/browse/HADOOP-18073)
> The S3A connector now uses the V2 AWS SDK. This is a significant change at
> the source code level. Any applications using the internal
> extension/override points in the filesystem connector are likely to break.
> Consult the document aws_sdk_upgrade for the full details.
>
> hadoop-thirdparty will also provide the new RC0 soon.
>
> Best Regards,
> Shilun Fan.
>
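The HDFS-15180 highlight quoted above describes replacing a single dataset-wide lock with one lock per block pool, so that activity in one namespace no longer serializes operations in every other namespace. A rough, hypothetical sketch of per-block-pool locking follows; it is not the actual FsDatasetImpl implementation, just the shape of the idea.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Hypothetical sketch: each block pool (namespace) gets its own read/write
    // lock, so heavy activity in one pool does not block operations in another.
    // The real FsDatasetImpl is considerably more involved.
    class PerBlockPoolLockManager {
      private final ConcurrentHashMap<String, ReentrantReadWriteLock> locks =
          new ConcurrentHashMap<>();

      private ReentrantReadWriteLock lockFor(String bpid) {
        return locks.computeIfAbsent(bpid, k -> new ReentrantReadWriteLock());
      }

      <T> T withReadLock(String bpid, java.util.function.Supplier<T> op) {
        ReentrantReadWriteLock lock = lockFor(bpid);
        lock.readLock().lock();
        try {
          return op.get();
        } finally {
          lock.readLock().unlock();
        }
      }

      void withWriteLock(String bpid, Runnable op) {
        ReentrantReadWriteLock lock = lockFor(bpid);
        lock.writeLock().lock();
        try {
          op.run();
        } finally {
          lock.writeLock().unlock();
        }
      }
    }

With a map like this, a read in one block pool and a write in another can proceed concurrently, while two writers in the same pool still exclude each other.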
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/

No changes

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen spotbugs xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML : Parsing Error(s):

       hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs : module:hadoop-yarn-project/hadoop-yarn
       org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be. At TimelineConnector.java:[line 82]

    spotbugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
       org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be. At TimelineConnector.java:[line 82]

    spotbugs : module:hadoop-yarn-project
       org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be. At TimelineConnector.java:[line 82]

    spotbugs : module:root
       org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be. At TimelineConnector.java:[line 82]

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-compile-cc-root.txt [96K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-compile-javac-root.txt [12K]

   blanks:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/blanks-eol.txt [15M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/blanks-tabs.txt [2.0M]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-checkstyle-root.txt [13M]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-hadolint.txt [24K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-pathlen.txt [16K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-shellcheck.txt [24K]

   xml:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/xml.txt [24K]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-javadoc-javadoc-root.txt [244K]

   spotbugs:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html [8.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/branch-spotbugs-hadoop-yarn-project-warnings.html [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/branch-spotbugs-root-warnings.html [20K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org
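The four spotbugs entries above are one finding reported at several module granularities: a mutable public static "constant". The usual resolution is simply to declare the field final. The snippet below illustrates the pattern on an invented example class, not the actual TimelineConnector source.

    // Illustration of the SpotBugs complaint about a mutable static "constant".
    class TimeoutConfigExample {
      // Flagged: any code could reassign this, so it is not really a constant.
      public static int DEFAULT_SOCKET_TIMEOUT = 60_000;

      // Fixed: declaring it final documents and enforces that it never changes.
      public static final int DEFAULT_SOCKET_TIMEOUT_MS = 60_000;
    }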
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/612/

No changes

-1 overall

The following subsystems voted -1:
    blanks hadolint mvnsite pathlen spotbugs unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML : Parsing Error(s):

       hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs
       Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory). Redundant null check at DataStorage.java:[line 695]
       Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String). Redundant null check at MappableBlockLoader.java:[line 138]
       Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId). Redundant null check at MemoryMappableBlockLoader.java:[line 75]
       Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId). Redundant null check at NativePmemMappableBlockLoader.java:[line 85]
       Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String). Redundant null check at NativePmemMappableBlockLoader.java:[line 130]
       org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object). At RollingWindowManager.java:[line 1]

    spotbugs : module:hadoop-yarn-project/hadoop-yarn
       org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be. At TimelineConnector.java:[line 82]
       Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState). Redundant null check at ResourceLocalizationService.java:
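The hadoop-hdfs spotbugs findings above all follow one pattern: a reference is dereferenced (or otherwise proven non-null) and then checked against null, so the check can never fire. The snippet below is a generic illustration of that pattern, unrelated to the actual DataStorage or MappableBlockLoader code.

    import java.io.IOException;
    import java.nio.channels.FileChannel;

    class RedundantNullcheckExample {
      // SpotBugs RCN-style finding: metaChannel is dereferenced on the first
      // line, so by the time the null check runs it is provably non-null (an
      // NPE would already have been thrown); the check is dead code.
      static long sizeWithRedundantCheck(FileChannel metaChannel) throws IOException {
        long size = metaChannel.size();   // dereference first
        if (metaChannel == null) {        // flagged: redundant null check of known non-null value
          throw new IOException("no channel");
        }
        return size;
      }

      // The same logic without the redundant branch.
      static long size(FileChannel metaChannel) throws IOException {
        return metaChannel.size();
      }
    }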
Re: [VOTE] Release Apache Hadoop 3.4.0 RC0
Thank you very much for your help in verifying this version! We will use
version 3.5.0 as the fix version for JIRAs going forward.

Best Regards,
Shilun Fan.

> wonderful! I'll be testing over the weekend.
>
> Meanwhile, new changes I'm putting in to trunk are tagged as fixed in 3.5.0
> -correct?
>
> steve
>
> On Thu, 11 Jan 2024 at 05:15, slfan1989 wrote:
>
> Hello all,
>
> We plan to release Hadoop 3.4.0 based on Hadoop trunk; this is the first
> 3.4.0 RC.
>
> The RC is available at:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/ (for amd64)
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-arm64/ (for arm64)
>
> Maven artifacts are built on an x86 machine and are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1391/
>
> My public key:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Changelog:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/CHANGELOG.md
>
> Release notes:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/RELEASENOTES.md
>
> This is a relatively big release (by Hadoop standards), containing about
> 2852 commits.
>
> Please give it a try; this RC vote will run for 7 days.
>
> Feature highlights:
>
> DataNode FsDatasetImpl Fine-Grained Locking via BlockPool
>
> [HDFS-15180](https://issues.apache.org/jira/browse/HDFS-15180) splits the
> FsDatasetImpl datasetLock by block pool, relieving contention on the single
> heavyweight datasetLock when a large cluster serves many namespaces.
>
> YARN Federation improvements
>
> [YARN-5597](https://issues.apache.org/jira/browse/YARN-5597) brings many
> improvements, including the following:
>
> 1. YARN Router now provides a full implementation of all relevant
>    interfaces, including ApplicationClientProtocol,
>    ResourceManagerAdministrationProtocol, and RMWebServiceProtocol.
> 2. The YARN Router now supports application cleanup and automatic
>    offlining of SubClusters.
> 3. The Router and AMRMProxy code was optimized, and previously pending
>    functionality was completed.
> 4. Audit logs and metrics for the Router were upgraded.
> 5. Cluster security was strengthened with the addition of Kerberos support.
> 6. The Router's web pages have been enhanced.
>
> Upgrade AWS SDK to V2
>
> [HADOOP-18073](https://issues.apache.org/jira/browse/HADOOP-18073)
> The S3A connector now uses the V2 AWS SDK. This is a significant change at
> the source code level. Any applications using the internal
> extension/override points in the filesystem connector are likely to break.
> Consult the document aws_sdk_upgrade for the full details.
>
> hadoop-thirdparty will also provide the new RC0 soon.
>
> Best Regards,
> Shilun Fan.
>
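On the AWS SDK V2 highlight repeated in the quoted mail: the upgrade swaps the v1 com.amazonaws client classes for the v2 software.amazon.awssdk ones, which is why code reaching into S3A's internal extension points is expected to break. Below is a small, generic comparison of the two client styles, using plain SDK calls and a placeholder bucket name; it is not S3A's internal wiring.

    // AWS SDK for Java v1 style (what applications built against before):
    //   import com.amazonaws.services.s3.AmazonS3;
    //   import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    //   AmazonS3 v1Client = AmazonS3ClientBuilder.standard()
    //       .withRegion("us-east-1")
    //       .build();

    // AWS SDK for Java v2 style:
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;

    public class S3V2Example {
      public static void main(String[] args) {
        S3Client s3 = S3Client.builder()
            .region(Region.US_EAST_1)
            .build();
        // Immutable request objects built with builders are the idiomatic v2 style.
        ListObjectsV2Request request = ListObjectsV2Request.builder()
            .bucket("example-bucket")   // placeholder bucket name
            .maxKeys(10)
            .build();
        s3.listObjectsV2(request).contents()
            .forEach(obj -> System.out.println(obj.key()));
        s3.close();
      }
    }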
Fwd: Fw:Re: [VOTE] Release Apache Hadoop 3.4.0 RC0
Thank you very much for your suggestions! I will continue to improve the RC0 version.

Best Regards,
Shilun Fan.

-------- Original --------
From: "Masatake Iwasaki" <iwasak...@oss.nttdata.com>
Date: 2024/1/11 13:45
To: "common-dev" <common-...@hadoop.apache.org>; "hdfs-dev" <hdfs-dev@hadoop.apache.org>; "yarn-dev" <yarn-...@hadoop.apache.org>; "mapreduce-dev" <mapreduce-...@hadoop.apache.org>
CC: "private" <priv...@hadoop.apache.org>
Subject: Re: [VOTE] Release Apache Hadoop 3.4.0 RC0

Thanks for driving this release, Shilun Fan.

The top page of the site documentation (in hadoop-3.4.0-RC0-site.tar.gz) looks
the same as 3.3.5. While index.md.vm is updated in branch-3.4.0 [1], the change
does not seem to be reflected.

The release-3.4.0-RC0 tag should be pushed to make checking easier.

In addition, the description of the previous release's new features should be
removed from index.md.vm.

[1] https://github.com/apache/hadoop/blob/branch-3.4.0/hadoop-project/src/site/markdown/index.md.vm

Masatake Iwasaki

On 2024/01/11 14:15, slfan1989 wrote:
> Hello all,
>
> We plan to release Hadoop 3.4.0 based on Hadoop trunk; this is the first
> 3.4.0 RC.
>
> The RC is available at:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/ (for amd64)
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-arm64/ (for arm64)
>
> Maven artifacts are built on an x86 machine and are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1391/
>
> My public key:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Changelog:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/CHANGELOG.md
>
> Release notes:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/RELEASENOTES.md
>
> This is a relatively big release (by Hadoop standards), containing about
> 2852 commits.
>
> Please give it a try; this RC vote will run for 7 days.
>
> Feature highlights:
>
> DataNode FsDatasetImpl Fine-Grained Locking via BlockPool
>
> [HDFS-15180](https://issues.apache.org/jira/browse/HDFS-15180) splits the
> FsDatasetImpl datasetLock by block pool, relieving contention on the single
> heavyweight datasetLock when a large cluster serves many namespaces.
>
> YARN Federation improvements
>
> [YARN-5597](https://issues.apache.org/jira/browse/YARN-5597) brings many
> improvements, including the following:
>
> 1. YARN Router now provides a full implementation of all relevant
>    interfaces, including ApplicationClientProtocol,
>    ResourceManagerAdministrationProtocol, and RMWebServiceProtocol.
> 2. The YARN Router now supports application cleanup and automatic
>    offlining of SubClusters.
> 3. The Router and AMRMProxy code was optimized, and previously pending
>    functionality was completed.
> 4. Audit logs and metrics for the Router were upgraded.
> 5. Cluster security was strengthened with the addition of Kerberos support.
> 6. The Router's web pages have been enhanced.
>
> Upgrade AWS SDK to V2
>
> [HADOOP-18073](https://issues.apache.org/jira/browse/HADOOP-18073)
> The S3A connector now uses the V2 AWS SDK. This is a significant change at
> the source code level. Any applications using the internal
> extension/override points in the filesystem connector are likely to break.
> Consult the document aws_sdk_upgrade for the full details.
>
> hadoop-thirdparty will also provide the new RC0 soon.
>
> Best Regards,
> Shilun Fan.