Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint jshint pathlen unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
       hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml
       hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
       hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
       hadoop-tools/hadoop-azure/src/config/checkstyle.xml
       hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    Failed junit tests :
       hadoop.util.TestDiskCheckerWithDiskIo
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
       hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
       hadoop.hdfs.server.federation.router.TestRouterQuota
       hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
       hadoop.yarn.server.resourcemanager.TestClientRMService
       hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
       hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
       hadoop.tools.TestDistCpSystem
       hadoop.resourceestimator.service.TestResourceEstimatorService
       hadoop.resourceestimator.solver.impl.TestLpSolver

   jshint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/diff-patch-jshint.txt [208K]

   cc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/diff-compile-javac-root.txt [456K]

   checkstyle:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/diff-checkstyle-root.txt [16M]

   hadolint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/pathlen.txt [12K]

   pylint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/diff-patch-pylint.txt [60K]

   shellcheck:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/diff-patch-shellcheck.txt [56K]

   shelldocs:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/whitespace-tabs.txt [1.3M]

   xml:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/xml.txt [4.0K]

   javadoc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/diff-javadoc-javadoc-root.txt [20K]

   unit:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [216K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [272K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [120K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/112/artifact/out/patch-unit-hadoop-y
Committers please subscribe to security@hadoop
The security@hadoop mailing list is restricted to Hadoop committers. Subscribing to the mailing list is not an automatic process. If you are a committer, please subscribe by sending an email to security-subscr...@hadoop.apache.org so we can discuss vulnerabilities in private.

Thanks,
Wei-Chiu
No Hadoop Storage Online Meetup this week.
Wednesday is Veterans Day in the US, when many businesses take the day off, so I'm not arranging a call. But feel free to suggest a topic for the next one!

Wei-Chiu
[jira] [Resolved] (HDFS-15485) Fix outdated properties of JournalNode when performing rollback
[ https://issues.apache.org/jira/browse/HDFS-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HDFS-15485.
------------------------------------
    Fix Version/s: 3.2.3
                   3.1.5
                   3.3.1
       Resolution: Fixed

Cherry-picked the commit into branch-3.3 ~ branch-3.1. Thanks [~Deegue]!

> Fix outdated properties of JournalNode when performing rollback
> ---
>
>                 Key: HDFS-15485
>                 URL: https://issues.apache.org/jira/browse/HDFS-15485
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Deegue
>            Assignee: Deegue
>            Priority: Minor
>             Labels: pull-request-available
>            Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
>         Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When rolling back an HDFS cluster, the properties in JNStorage are not refreshed after the storage directory changes. This leads to exceptions when starting the NameNode, such as:
> {code:java}
> 2020-07-09 19:04:12,810 FATAL [IPC Server handler 105 on 8022] org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [10.0.118.217:8485, 10.0.117.208:8485, 10.0.118.179:8485], stream=null))
> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
> 10.0.118.217:8485: Incompatible namespaceID for journal Storage Directory /mnt/vdc-11176G-0/dfs/jn/nameservicetest1: NameNode has nsId 647617129 but storage has nsId 0
>     at org.apache.hadoop.hdfs.qjournal.server.JNStorage.checkConsistentNamespace(JNStorage.java:236)
>     at org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:300)
>     at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:136)
>     at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:133)
>     at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25417)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2274)
> {code}
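The root cause described above is stale in-memory storage properties. A minimal hedged sketch of the idea behind the fix follows; this is not the actual HDFS-15485 patch, and the class and method names here are hypothetical, while the current/VERSION file and its namespaceID property are the standard HDFS storage layout the stack trace refers to:

{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

class JournalStorageSketch {
  // In the reported bug this keeps its pre-rollback value (effectively 0),
  // so newEpoch() fails the consistency check against the NameNode's nsId.
  private long namespaceId;

  /** Re-load properties from the current/VERSION file after a rollback. */
  void refreshAfterRollback(File storageDir) throws IOException {
    File versionFile = new File(storageDir, "current/VERSION");
    Properties props = new Properties();
    try (FileInputStream in = new FileInputStream(versionFile)) {
      props.load(in); // VERSION is a plain java.util.Properties file
    }
    // Refresh the cached value instead of serving the stale one.
    namespaceId = Long.parseLong(props.getProperty("namespaceID", "0"));
  }
}
{code}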
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/

[Nov 9, 2020 8:05:08 PM] (noreply) HADOOP-17360. Log the remote address for authentication success (#2441)
[Nov 9, 2020 11:06:16 PM] (noreply) HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount (#2288). Contributed by Leon Gao.

-1 overall

The following subsystems voted -1:
    pathlen unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    Failed junit tests :
       hadoop.hdfs.TestGetFileChecksum
       hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem
       hadoop.hdfs.server.federation.router.TestRouterRpc
       hadoop.yarn.applications.distributedshell.TestDistributedShell
       hadoop.tools.dynamometer.TestDynamometerInfra
       hadoop.tools.dynamometer.TestDynamometerInfra
       hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
       hadoop.fs.azure.TestNativeAzureFileSystemMocked
       hadoop.fs.azure.TestBlobMetadata
       hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
       hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck
       hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
       hadoop.fs.azure.TestWasbFsck
       hadoop.fs.azure.TestOutOfBandAzureBlobOperations

   cc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/diff-compile-cc-root.txt [48K]

   javac:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/diff-compile-javac-root.txt [568K]

   checkstyle:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/diff-checkstyle-root.txt [16M]

   pathlen:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/pathlen.txt [12K]

   pylint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/diff-patch-pylint.txt [60K]

   shellcheck:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/diff-patch-shelldocs.txt [44K]

   whitespace:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/whitespace-eol.txt [13M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/whitespace-tabs.txt [2.0M]

   xml:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/xml.txt [24K]

   javadoc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/diff-javadoc-javadoc-root.txt [2.0M]

   unit:

      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [344K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [156K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [452K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/patch-unit-hadoop-tools_hadoop-dynamometer_hadoop-dynamometer-infra.txt [8.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/321/artifact/out/patch-unit-hadoop-tools_hadoop-dynamometer.txt [24K]
      https://ci-hadoop.apache.org/job/hadoop-q
[jira] [Created] (HDFS-15678) TestDFSOutputStream#testCloseTwice implementation is broken
Ahmed Hussein created HDFS-15678:
------------------------------------

             Summary: TestDFSOutputStream#testCloseTwice implementation is broken
                 Key: HDFS-15678
                 URL: https://issues.apache.org/jira/browse/HDFS-15678
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: dfs
            Reporter: Ahmed Hussein
            Assignee: Ahmed Hussein

[~daryn] noticed that the test {{TestDFSOutputStream#testCloseTwice}} was "cheating" by relying on incorrect behavior to pass. It closes a stream, injects an exception into the stream, closes again and verifies that the injected exception is thrown, then closes once more and verifies that no exception is thrown. The problem is that a closed stream must never throw again, so the injection should be a no-op.
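A hedged illustration of the invariant described above (this is not Hadoop source, just a self-contained toy stream): once close() has succeeded, later calls must be no-ops, so an exception injected afterwards can never surface.

{code:java}
import java.io.IOException;
import java.io.OutputStream;

class IdempotentCloseStream extends OutputStream {
  private boolean closed;
  private IOException injected; // stands in for LastExceptionInStreamer

  void inject(IOException e) { injected = e; }

  @Override public void write(int b) throws IOException {
    if (closed) throw new IOException("stream is closed");
  }

  @Override public void close() throws IOException {
    if (closed) {
      return;               // no-op: a closed stream must never throw again
    }
    closed = true;
    if (injected != null) {
      IOException e = injected;
      injected = null;      // surface a pending error exactly once
      throw e;
    }
  }
}
{code}

Under these semantics, injecting an exception after a successful close() and then calling close() again simply returns, which is exactly why the original test's final assertion only passed by accident.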
[jira] [Created] (HDFS-15679) DFSOutputStream close should be a no-op when called multiple times
Ahmed Hussein created HDFS-15679:
------------------------------------

             Summary: DFSOutputStream close should be a no-op when called multiple times
                 Key: HDFS-15679
                 URL: https://issues.apache.org/jira/browse/HDFS-15679
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Ahmed Hussein
            Assignee: Xiao Chen

While I was looking into the incorrect implementation of HDFS-15678, I found that once I implement the correct logic, the JUnit test fails. It turns out that there is an inconsistency in {{DFSOutputStream.closeImpl()}} introduced by HDFS-13164. The change in [that line|https://github.com/apache/hadoop/commit/51088d323359587dca7831f74c9d065c2fccc60d#diff-3a80b95578dc5079cebf0441e1dab63d5844c02fa2d04071c165ec4f7029f918R860] makes close() throw the exception multiple times, which contradicts HDFS-5335.

Also, I believe the implementation is incorrect and needs to be reviewed. For example, it first checks {{isClosed()}}, then inside the {{finally}} block (which is itself inside the {{isClosed()}} block) there is a second check for {{!closed}}. *A DFSOutputStream can never be reopened after being closed.*

{code:java}
    if (isClosed()) {
      LOG.debug("Closing an already closed stream. [Stream:{}, streamer:{}]",
          closed, getStreamer().streamerClosed());
      try {
        getStreamer().getLastException().check(true);
      } catch (IOException ioe) {
        cleanupAndRethrowIOException(ioe);
      } finally {
        if (!closed) {
          // If stream is not closed but streamer closed, clean up the stream.
          // Most importantly, end the file lease.
          closeThreads(true);
        }
      }
{code}

[~xiaochen] and [~yzhangal], can you please take another look at that patch? That change breaks the semantics of {{close()}}.

For convenience, this is a test that fails because of the change in HDFS-13164:

{code:java}
  public void testCloseTwice() throws IOException {
    DistributedFileSystem fs = cluster.getFileSystem();
    FSDataOutputStream os = fs.create(new Path("/test"));
    DFSOutputStream dos = (DFSOutputStream) Whitebox.getInternalState(os,
        "wrappedStream");
    DataStreamer streamer = (DataStreamer) Whitebox
        .getInternalState(dos, "streamer");
    @SuppressWarnings("unchecked")
    LastExceptionInStreamer ex = (LastExceptionInStreamer) Whitebox
        .getInternalState(streamer, "lastException");
    Throwable thrown = (Throwable) Whitebox.getInternalState(ex, "thrown");
    Assert.assertNull(thrown);

    // force stream to break. output stream needs to encounter a real
    // error to properly mark it closed with an exception
    cluster.shutdown(true, false);

    try {
      dos.close();
      Assert.fail("should have thrown");
    } catch (IOException e) {
      Assert.assertEquals(e.toString(), EOFException.class, e.getClass());
    }
    thrown = (Throwable) Whitebox.getInternalState(ex, "thrown");
    Assert.assertNull(thrown);
    dos.close();

    // even if the exception is set again, close should not throw it
    ex.set(new IOException("dummy"));
    dos.close();
  }
{code}
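For contrast, a hedged sketch of the close-once shape the reporter argues for. This is not the actual DFSOutputStream code: closeThreads() is taken from the snippet above, while flushBuffered() is a hypothetical stand-in for flushing the remaining packets.

{code:java}
import java.io.IOException;

abstract class CloseOnceSketch {
  private boolean closed;

  final synchronized void close() throws IOException {
    if (closed) {
      return;             // repeated close(): never rethrow old errors
    }
    closed = true;        // flip the flag first so a throw below cannot
                          // leave the stream half-open for the next caller
    try {
      flushBuffered();    // hypothetical: push remaining packets
    } finally {
      closeThreads(true); // end the streamer and, crucially, the lease
    }
  }

  abstract void flushBuffered() throws IOException;
  abstract void closeThreads(boolean force) throws IOException;
}
{code}

With this shape, only the first close() can throw; the test above then passes for the right reason instead of relying on the injected exception being rethrown.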
Re: [VOTE] Release Apache Hadoop 3.2.2 - RC2
I compiled a native build from source and started a non-HA, a non-HA secure (TLS and Kerberos), and an HA HDFS cluster using Docker. On each, I created a few files, looked around the web UI, etc. Everything looks fine.

The only strange thing was on the NameNode web UI under the overview section - the "Unknown" is a little strange, but I am not sure what is usually in there:

Version: 3.2.2, rUnknown
Compiled: Mon Nov 09 17:30:00 + 2020 by root from Unknown

+1 from me.

Stephen.

On Sat, Nov 7, 2020 at 2:52 PM Xiaoqiao He wrote:

> Hi folks,
>
> The release candidate (RC2) for Hadoop-3.2.2 is available now.
> There are two commits[1] of difference between RC2 and RC1[2] (thanks
> Akira Ajisaka for the report):
> * revert HADOOP-17306.
> * include HDFS-15643.
>
> The RC2 is available at:
> http://people.apache.org/~hexiaoqiao/hadoop-3.2.2-RC2
> The RC2 tag in github is here:
> https://github.com/apache/hadoop/tree/release-3.2.2-RC2
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1288
>
> You can find my public key at:
> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS or
> https://people.apache.org/keys/committer/hexiaoqiao.asc directly.
>
> Please try the release and vote. The vote will close on 2020/11/14 at
> 00:00 CST.
>
> Thanks,
> He Xiaoqiao
>
> [1]
> https://github.com/apache/hadoop/compare/release-3.2.2-RC1...release-3.2.2-RC2
> [2]
> https://lists.apache.org/thread.html/rc7247434f5a77b6d0d1d1f3fcd6b6668eb431a5697e582a6338f0eb7%40%3Chdfs-dev.hadoop.apache.org%3E
> [3]
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12335948
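For anyone wanting to check those fields locally: the web UI reads them from Hadoop's org.apache.hadoop.util.VersionInfo, and "rUnknown"/"from Unknown" typically just mean the build ran outside a git checkout (e.g. from a source tarball), so no revision or build host could be stamped into version-info.properties. A small sketch, assuming hadoop-common is on the classpath:

{code:java}
import org.apache.hadoop.util.VersionInfo;

public class PrintBuildVersion {
  public static void main(String[] args) {
    // These are the same values the NameNode overview page renders.
    System.out.println("Version:  " + VersionInfo.getVersion());
    System.out.println("Revision: " + VersionInfo.getRevision()); // "Unknown" without git metadata
    System.out.println("Compiled: " + VersionInfo.getDate()
        + " by " + VersionInfo.getUser());
  }
}
{code}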
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/54/

[Nov 9, 2020 8:05:08 PM] (noreply) HADOOP-17360. Log the remote address for authentication success (#2441)
[Nov 9, 2020 11:06:16 PM] (noreply) HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount (#2288). Contributed by Leon Gao.
[Nov 10, 2020 4:58:27 AM] (noreply) HADOOP-17352. Update PATCH_NAMING_RULE in the personality file. (#2433)
[Nov 10, 2020 5:01:10 AM] (noreply) HDFS-15667. Audit log record the unexpected allowed result when delete (#2437)
[Nov 10, 2020 5:09:03 AM] (Hui Fei) HDFS-15668. RBF: Fix RouterRPCMetrics annocation and document misplaced error. Contributed by Hongbing Wang.

-1 overall

The following subsystems voted -1:
    blanks findbugs mvnsite pathlen shadedclient unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    findbugs :

       module:hadoop-hdfs-project/hadoop-hdfs
       Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) Redundant null check at DataStorage.java:[line 694]

       module:hadoop-hdfs-project
       Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) Redundant null check at DataStorage.java:[line 694]

       module:hadoop-yarn-project/hadoop-yarn
       Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) Redundant null check at ResourceLocalizationService.java:[line 343]
       Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) Redundant null check at ResourceLocalizationService.java:[line 356]
       Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 333]

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server
       Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$Local
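The findbugs items above are style findings rather than functional bugs. A hedged, self-contained illustration (not Hadoop source) of the two patterns flagged: a redundant null check on a value that cannot be null, and an unbox-then-rebox round trip.

{code:java}
import java.util.Iterator;
import java.util.List;

class FindbugsPatterns {
  // "Redundant nullcheck of it, which is known to be non-null":
  // iterator() never returns null, so the check below is dead code.
  static int redundantNullcheck(List<String> items) {
    Iterator<String> it = items.iterator();
    if (it == null) {          // <- findbugs flags this as redundant
      return 0;
    }
    int n = 0;
    while (it.hasNext()) { it.next(); n++; }
    return n;
  }

  // "Boxed value is unboxed and then immediately reboxed":
  // the Long is unboxed to long, then auto-boxed right back on return.
  static Long unboxRebox(Long boxed) {
    long v = boxed;            // unbox
    return v;                  // <- immediately reboxed
  }
}
{code}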
[jira] [Created] (HDFS-15680) Disable Broken Azure Junits
Ahmed Hussein created HDFS-15680:
------------------------------------

             Summary: Disable Broken Azure Junits
                 Key: HDFS-15680
                 URL: https://issues.apache.org/jira/browse/HDFS-15680
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: fs/azure
            Reporter: Ahmed Hussein
            Assignee: Ahmed Hussein

There are 6 test classes that have been failing on Yetus for several months. They contribute more than 41 failing tests, which makes reviewing Yetus reports a pain in the neck. Another point is to save resources by avoiding needless use of ports, memory, and CPU.

Over the last month there has been some effort to bring Yetus back to a stable state, but there has been no progress in addressing the Azure failures. Generally, I do not like to disable failing tests, but in this specific case I see no sense in keeping 41 tests from one module failing for several months. If someone finds those tests useful, they can re-enable them on Yetus *after* the tests are fixed.

Following a PR, I have to spend considerable time verifying that my patch does not cause any failures. A thorough review takes a long time browsing the nightly builds and the PR report, so please consider how much time has been spent reviewing those stack traces over the last months. Finally, this is one of the reasons developers tend to ignore the reports: they take too much time to review, and by default the errors are assumed to be irrelevant.

CC: [~aajisaka], [~elgoiri], [~weichiu], [~ayushtkn]

{code:bash}
hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
hadoop.fs.azure.TestNativeAzureFileSystemMocked
hadoop.fs.azure.TestBlobMetadata
hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck
hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
hadoop.fs.azure.TestWasbFsck
hadoop.fs.azure.TestOutOfBandAzureBlobOperations
{code}

{code:bash}
org.apache.hadoop.fs.azure.TestBlobMetadata.testFolderMetadata
org.apache.hadoop.fs.azure.TestBlobMetadata.testFirstContainerVersionMetadata
org.apache.hadoop.fs.azure.TestBlobMetadata.testPermissionMetadata
org.apache.hadoop.fs.azure.TestBlobMetadata.testOldPermissionMetadata
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testNoTempBlobsVisible
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testLinkBlobs
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testListStatusRootDir
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameDirectoryMoveToExistingDirectory
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testListStatus
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameDirectoryAsExistingDirectory
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameToDirWithSamePrefixAllowed
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testLSRootDir
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testDeleteRecursively
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck.testWasbFsck
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testChineseCharactersFolderRename
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderInFolderListingWithZeroByteRenameMetadata
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderInFolderListing
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testUriEncoding
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testDeepFileCreation
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testListDirectory
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderRenameInProgress
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRenameFolder
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRenameImplicitFolder
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolder
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testStoreDeleteFolder
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRename
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked.testListStatus
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked.testRenameDirectoryAsEmptyDirectory
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked.testListStatusFilterWithSomeMatches
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked.testRenameDirectoryAsNonExistentDirectory
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked.testGlobStatusSomeMatchesInDirectories
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked.testGlobStatusWithMultipleWildCardMatches
org.apache.hadoop.fs.azure.TestN
{code}
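A minimal hedged sketch of what "disabling" would look like: these hadoop-azure tests are JUnit 4, where a whole class can be skipped with @Ignore plus a reason referencing the tracking JIRA. The class name is one of those listed above; the annotation placement is illustration only, not a committed patch.

{code:java}
import org.junit.Ignore;
import org.junit.Test;

// Skipped wholesale until the mocked Azure file system issues are resolved.
@Ignore("Broken on Yetus for months; tracked by HDFS-15680")
public class TestBlobMetadata {
  @Test
  public void testFolderMetadata() {
    // existing test body unchanged; JUnit now reports it as skipped
  }
}
{code}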
Mandarin Hadoop Storage Online Meetup this week (APAC Mandarin)
Hi All,

We are happy to invite Hu Haiyang (胡海洋) to lead a discussion on HDFS Erasure Coding Practice at DiDi at this week's Hadoop storage online meetup (APAC Mandarin).

Time/Date:
11/11 10PM (US West Coast, PST)
11/12 1PM (Beijing, China, CST)

Join Zoom Meeting
https://cloudera.zoom.us/j/880548968

- He Xiaoqiao