[jira] [Created] (HADOOP-18251) Fix failure of extracting JIRA id from commit message in git_jira_fix_version_check.py
Masatake Iwasaki created HADOOP-18251:
-----------------------------------------

             Summary: Fix failure of extracting JIRA id from commit message in git_jira_fix_version_check.py
                 Key: HADOOP-18251
                 URL: https://issues.apache.org/jira/browse/HADOOP-18251
             Project: Hadoop Common
          Issue Type: Bug
          Components: build
            Reporter: Masatake Iwasaki
            Assignee: Masatake Iwasaki

git_jira_fix_version_check.py is confused by a commit message like {{"YARN-1151. Ability to configure auxiliary services from HDFS-based JAR files."}}, which contains both {{YARN-}} and {{HDFS-}}. The latter {{HDFS-}} is unexpectedly picked as the JIRA issue id, and a 404 is then thrown on accessing an invalid URL like "https://issues.apache.org/jira/rest/api/2/issue/HDFS-".
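For illustration, here is a minimal Java sketch of the failure mode. This is an editor's assumption about the matching logic, not the script's actual code (git_jira_fix_version_check.py is Python): an unanchored pattern whose numeric part is optional can latch onto the bare {{HDFS-}} inside "HDFS-based", while anchoring to the start of the subject and requiring digits recovers the intended {{YARN-1151}}.

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JiraIdSketch {
    public static void main(String[] args) {
        String msg = "YARN-1151. Ability to configure auxiliary services "
                + "from HDFS-based JAR files.";

        // Buggy shape (hypothetical): digits optional, matches taken anywhere
        // in the message, so the later bare "HDFS-" can be picked up.
        Matcher loose = Pattern.compile("(HADOOP|HDFS|YARN|MAPREDUCE)-\\d*").matcher(msg);
        String picked = null;
        while (loose.find()) {
            picked = loose.group();            // last hit wins: "HDFS-"
        }
        System.out.println("loose:  " + picked);

        // Fixed shape: anchor to the start of the subject and require digits.
        Matcher strict = Pattern.compile("^(HADOOP|HDFS|YARN|MAPREDUCE)-\\d+").matcher(msg);
        if (strict.find()) {
            System.out.println("strict: " + strict.group());  // "YARN-1151"
        }
    }
}
{code}

Built this way, the REST URL always ends in a full issue id such as YARN-1151 rather than a bare project prefix.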
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/

No changes


-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests :
        hadoop.fs.TestFileUtil
        hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
        hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
        hadoop.hdfs.TestSafeMode
        hadoop.hdfs.server.balancer.TestBalancer
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
        hadoop.hdfs.server.federation.router.TestRouterQuota
        hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
        hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
        hadoop.yarn.server.resourcemanager.TestClientRMService
        hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
        hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
        hadoop.mapreduce.lib.input.TestLineRecordReader
        hadoop.mapred.TestLineRecordReader
        hadoop.tools.TestDistCpSystem
        hadoop.yarn.sls.TestSLSRunner
        hadoop.resourceestimator.solver.impl.TestLpSolver
        hadoop.resourceestimator.service.TestResourceEstimatorService

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/diff-compile-javac-root.txt [476K]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/diff-checkstyle-root.txt [14M]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-mvnsite-root.txt [556K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/pathlen.txt [12K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/diff-patch-shellcheck.txt [72K]

   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/whitespace-tabs.txt [1.3M]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-javadoc-root.txt [40K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [216K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [428K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [116K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt [28K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/670/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [28K]
      https://ci-hadoop.
[jira] [Reopened] (HADOOP-17992) Disable JIRA plugin for YETUS on Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Masatake Iwasaki reopened HADOOP-17992:
---------------------------------------

> Disable JIRA plugin for YETUS on Hadoop
> ---------------------------------------
>
>                 Key: HADOOP-17992
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17992
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 2.10.2
>            Reporter: Gautham Banasandra
>            Assignee: Gautham Banasandra
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 2.10.2
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> I’ve been noticing an issue with Jenkins CI where the jira-json file goes
> missing all of a sudden – jenkins / hadoop-multibranch / PR-3588 / #2
> (apache.org)
> {code}
> [2021-10-27T17:52:58.787Z] Processing: https://github.com/apache/hadoop/pull/3588
> [2021-10-27T17:52:58.787Z] GITHUB PR #3588 is being downloaded from
> [2021-10-27T17:52:58.787Z] https://api.github.com/repos/apache/hadoop/pulls/3588
> [2021-10-27T17:52:58.787Z] JSON data at Wed Oct 27 17:52:55 UTC 2021
> [2021-10-27T17:52:58.787Z] Patch data at Wed Oct 27 17:52:56 UTC 2021
> [2021-10-27T17:52:58.787Z] Diff data at Wed Oct 27 17:52:56 UTC 2021
> [2021-10-27T17:52:59.814Z] awk: cannot open
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-3588/centos-7/out/jira-json
> (No such file or directory)
> [2021-10-27T17:52:59.814Z] ERROR: https://github.com/apache/hadoop/pull/3588
> issue status is not matched with "Patch Available".
> [2021-10-27T17:52:59.814Z]
> {code}
> This causes the pipeline run to fail. I’ve seen this in my multiple attempts
> to re-run the CI on my PR –
> # After 45 minutes – [jenkins / hadoop-multibranch / PR-3588 / #1 (apache.org)|https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3588/1/pipeline/]
> # After 1 minute – [jenkins / hadoop-multibranch / PR-3588 / #2 (apache.org)|https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3588/2/pipeline/]
> # After 17 minutes – [jenkins / hadoop-multibranch / PR-3588 / #3 (apache.org)|https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3588/3/pipeline/]
> The hadoop-multibranch pipeline doesn't use ASF JIRA, so we're disabling the *jira* plugin to fix this issue.
[jira] [Resolved] (HADOOP-17992) Disable JIRA plugin for YETUS on Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Masatake Iwasaki resolved HADOOP-17992.
---------------------------------------
    Resolution: Duplicate

Since the commit log of [#3623|https://github.com/apache/hadoop/pull/3623] mentions HADOOP-17988 instead of HADOOP-17992, I changed the resolution to "Duplicate" and removed "Fix Version/s" in order to avoid confusing releasedocmaker.

> Disable JIRA plugin for YETUS on Hadoop
> ---------------------------------------
>
>                 Key: HADOOP-17992
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17992
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 2.10.2
>            Reporter: Gautham Banasandra
>            Assignee: Gautham Banasandra
>            Priority: Critical
>              Labels: pull-request-available
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> I’ve been noticing an issue with Jenkins CI where the jira-json file goes
> missing all of a sudden – jenkins / hadoop-multibranch / PR-3588 / #2
> (apache.org)
> {code}
> [2021-10-27T17:52:58.787Z] Processing: https://github.com/apache/hadoop/pull/3588
> [2021-10-27T17:52:58.787Z] GITHUB PR #3588 is being downloaded from
> [2021-10-27T17:52:58.787Z] https://api.github.com/repos/apache/hadoop/pulls/3588
> [2021-10-27T17:52:58.787Z] JSON data at Wed Oct 27 17:52:55 UTC 2021
> [2021-10-27T17:52:58.787Z] Patch data at Wed Oct 27 17:52:56 UTC 2021
> [2021-10-27T17:52:58.787Z] Diff data at Wed Oct 27 17:52:56 UTC 2021
> [2021-10-27T17:52:59.814Z] awk: cannot open
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-3588/centos-7/out/jira-json
> (No such file or directory)
> [2021-10-27T17:52:59.814Z] ERROR: https://github.com/apache/hadoop/pull/3588
> issue status is not matched with "Patch Available".
> [2021-10-27T17:52:59.814Z]
> {code}
> This causes the pipeline run to fail. I’ve seen this in my multiple attempts
> to re-run the CI on my PR –
> # After 45 minutes – [jenkins / hadoop-multibranch / PR-3588 / #1 (apache.org)|https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3588/1/pipeline/]
> # After 1 minute – [jenkins / hadoop-multibranch / PR-3588 / #2 (apache.org)|https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3588/2/pipeline/]
> # After 17 minutes – [jenkins / hadoop-multibranch / PR-3588 / #3 (apache.org)|https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3588/3/pipeline/]
> The hadoop-multibranch pipeline doesn't use ASF JIRA, so we're disabling the *jira* plugin to fix this issue.
[jira] [Resolved] (HADOOP-18233) Possible race condition with TemporaryAWSCredentialsProvider
[ https://issues.apache.org/jira/browse/HADOOP-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-18233.
-------------------------------------
    Resolution: Cannot Reproduce

> Possible race condition with TemporaryAWSCredentialsProvider
> ------------------------------------------------------------
>
>                 Key: HADOOP-18233
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18233
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: auth, fs/s3
>    Affects Versions: 3.3.1
>         Environment: spark v3.2.0
> hadoop-aws v3.3.1
> java version 1.8.0_265 via zulu-8
>            Reporter: Jason Sleight
>            Priority: Major
>
> I'm in the process of upgrading spark+hadoop versions for my workflows and
> observing a weird behavior regression. I'm setting
> {code:java}
> spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
> spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
> spark.sql.catalogImplementation=hive
> spark.hadoop.aws.region=us-west-2
> ...many other things, I think these might be the relevant ones though...{code}
> in Spark config and I'm observing some non-fatal warnings/exceptions (see
> below for some examples). The warnings/exceptions randomly appear for some
> tasks, which causes them to fail, but then when Spark retries the task it
> will succeed. The initial tasks don't always fail either, just sometimes.
> I also found that if I switch to SimpleAWSCredentials and use static keys,
> then I don't see any issues.
> My old setup was spark v3.0.2 with hadoop-aws v3.2.1 which also does not have
> these warnings/exceptions.
> From reading some other tickets I thought perhaps adding
> {code:java}
> spark.sql.hive.metastore.sharedPrefixes=com.amazonaws {code}
> would help, but it did not.
> Appreciate any suggestions for how to proceed or debug further :)
>
> Example stack traces:
> First one for an s3 read
> {code:java}
> WARN TaskSetManager: Lost task 27.0 in stage 4.0 (TID 29) ( executor 13):
> java.nio.file.AccessDeniedException: s3a://bucket/path/to/part.snappy.parquet:
> org.apache.hadoop.fs.s3a.CredentialInitializationException: Provider
> TemporaryAWSCredentialsProvider has no credentials
>   at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:206)
>   at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:170)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3289)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3053)
>   at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:39)
>   at org.apache.spark.sql.execution.datasources.parquet.ParquetFooterReader.readFooter(ParquetFooterReader.java:39)
>   at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.footerFileMetaData$lzycompute$1(ParquetFileFormat.scala:268)
>   at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.footerFileMetaData$1(ParquetFileFormat.scala:267)
>   at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:270)
>   at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
>   at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:164)
>   at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>   at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:522)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.columnartorow_nextBatch_0$(Unknown Source)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.processNext(Unknown Source)
>   at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>   at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
>   at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
>   at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
>   at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
>   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
>   at org.apache.spark.scheduler.Task.run(Task.scala:131)
>   at org.apache.spark.executor.Exec
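For context on the failure above: TemporaryAWSCredentialsProvider resolves short-lived STS session credentials from the S3A configuration, and if the session token is missing or expired when a task initializes the filesystem, it raises the CredentialInitializationException ("has no credentials") seen in the trace. Below is a minimal plain-Hadoop sketch of that setup; the bucket path mirrors the report, but the environment-variable sourcing is an assumption for illustration.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3ASessionCredentialsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // TemporaryAWSCredentialsProvider needs all three of these properties;
        // an absent or expired session token surfaces as "has no credentials".
        conf.set("fs.s3a.aws.credentials.provider",
                "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider");
        conf.set("fs.s3a.access.key", System.getenv("AWS_ACCESS_KEY_ID"));
        conf.set("fs.s3a.secret.key", System.getenv("AWS_SECRET_ACCESS_KEY"));
        conf.set("fs.s3a.session.token", System.getenv("AWS_SESSION_TOKEN"));

        Path path = new Path("s3a://bucket/path/to/part.snappy.parquet");
        try (FileSystem fs = path.getFileSystem(conf)) {
            // Same getFileStatus call that fails in the stack trace above.
            System.out.println(fs.getFileStatus(path));
        }
    }
}
{code}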
[jira] [Resolved] (HADOOP-18248) Fix Junit Test Deprecated assertThat
[ https://issues.apache.org/jira/browse/HADOOP-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fanshilun resolved HADOOP-18248.
--------------------------------
    Resolution: Duplicate

> Fix Junit Test Deprecated assertThat
> ------------------------------------
>
>                 Key: HADOOP-18248
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18248
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.4.0
>            Reporter: fanshilun
>            Assignee: fanshilun
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> javac emits a deprecation warning during compilation:
> org.junit.Assert.assertThat is deprecated; use
> org.hamcrest.MatcherAssert.assertThat() instead.
> {code:java}
> TestIncrementalBrVariations.java:141:4:[deprecation]
> assertThat(T,Matcher) in Assert has been deprecated {code}
> The related issue will be resolved in HDFS-16590.
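The migration itself is mechanical: org.hamcrest.MatcherAssert.assertThat has the same (T, Matcher) shape as the deprecated JUnit method, so call sites typically only need a changed static import. A minimal sketch follows; the test class and values here are hypothetical, not from the Hadoop patch.

{code:java}
// Deprecated since JUnit 4.13; javac flags every call site:
//   import static org.junit.Assert.assertThat;
// Replacement with an identical call shape:
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;

import org.junit.Test;

public class AssertThatMigrationSketch {
    @Test
    public void usesHamcrestAssertThat() {
        int reportedBlocks = 3;
        // Same matcher, same assertion semantics, no deprecation warning.
        assertThat(reportedBlocks, is(3));
    }
}
{code}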
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/307/

[May 22, 2022 1:59:38 PM] (noreply) HDFS-16584. Record StandbyNameNode information when Balancer is running. (#4333). Contributed by JiangHua Zhu.


-1 overall

The following subsystems voted -1:
    blanks pathlen spotbugs unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML : Parsing Error(s):
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs
        Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) Redundant null check at DataStorage.java:[line 695]
        Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) Redundant null check at MappableBlockLoader.java:[line 138]
        Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at MemoryMappableBlockLoader.java:[line 75]
        Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at NativePmemMappableBlockLoader.java:[line 85]
        Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String) Redundant null check at NativePmemMappableBlockLoader.java:[line 130]
        org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) At RollingWindowManager.java:[line 1]

    spotbugs : module:hadoop-yarn-project/hadoop-yarn
        Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) Redundant null check at ResourceLocalizationService.
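For readers unfamiliar with this SpotBugs category: a "redundant nullcheck" finding fires when a value the analyzer has proven non-null is still compared against null. A minimal hypothetical example of the flagged shape (not the actual Hadoop code) is:

{code:java}
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public class RedundantNullcheckSketch {
    static long blockSize(FileInputStream in) throws IOException {
        FileChannel channel = in.getChannel();  // documented to never return null
        if (channel == null) {                  // SpotBugs: redundant nullcheck of
            return -1;                          // channel, known to be non-null
        }
        return channel.size();
    }
}
{code}

Removing the dead null check (or tightening the contract where the value really can be null) clears the warning.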