Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/

No changes

-1 overall

The following subsystems voted -1:
   asflicense hadolint jshint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
   cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
   (runtime bigger than 1h 0m 0s)
   unit

Specific tests:

   XML :

      Parsing Error(s):
         hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
         hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml
         hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
         hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
         hadoop-tools/hadoop-azure/src/config/checkstyle.xml
         hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

   Failed junit tests :
      hadoop.util.TestDiskCheckerWithDiskIo
      hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
      hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
      hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
      hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
      hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
      hadoop.hdfs.server.federation.router.TestRouterQuota
      hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
      hadoop.yarn.server.resourcemanager.TestClientRMService
      hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
      hadoop.tools.TestDistCpSystem
      hadoop.resourceestimator.service.TestResourceEstimatorService
      hadoop.resourceestimator.solver.impl.TestLpSolver

   jshint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/diff-patch-jshint.txt [208K]

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/diff-compile-javac-root.txt [456K]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/diff-checkstyle-root.txt [16M]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/pathlen.txt [12K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/diff-patch-pylint.txt [60K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/diff-patch-shellcheck.txt [56K]

   shelldocs:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/whitespace-tabs.txt [1.3M]

   xml:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/xml.txt [4.0K]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/diff-javadoc-javadoc-root.txt [20K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [216K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [280K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [40K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [116K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/84/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt
[jira] [Created] (HDFS-15627) Audit log deletes after edit is written
Ahmed Hussein created HDFS-15627:

   Summary: Audit log deletes after edit is written
   Key: HDFS-15627
   URL: https://issues.apache.org/jira/browse/HDFS-15627
   Project: Hadoop HDFS
   Issue Type: Bug
   Components: logging, namenode
   Reporter: Ahmed Hussein
   Assignee: Ahmed Hussein

Deletes currently collect blocks in the write lock, write the edit, incrementally delete the blocks, and finally +audit log+. It should be: collect blocks, edit log, +audit log+, incremental delete. Once the edit is durable it is consistent to audit log the delete; there is no sense in deferring the audit into the indeterminate future. The problem occurs when the server hangs due to large deletes and the cause isn't easy to identify, whereas it would have been easily identified as the first delete logged after the hang.
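To make the proposed ordering concrete, here is a minimal Java sketch of a delete path that audit-logs right after the edit becomes durable. This is illustrative only: the class and method names (writeLock, collectBlocks, logDeleteEdit, syncEditLog, logAuditEvent, incrementalBlockDelete) are hypothetical placeholders, not the actual FSNamesystem/FSEditLog code.

{code:java}
import java.io.IOException;
import java.util.List;

/**
 * Illustrative-only sketch of the delete ordering proposed above. The types
 * and method names are hypothetical stand-ins, not the real NameNode API.
 */
abstract class DeleteOrderingSketch {
  abstract void writeLock();
  abstract void writeUnlock();
  abstract List<Long> collectBlocks(String src, boolean recursive) throws IOException;
  abstract void logDeleteEdit(String src, boolean recursive);
  abstract void syncEditLog();
  abstract void logAuditEvent(boolean succeeded, String cmd, String src);
  abstract void incrementalBlockDelete(List<Long> blockIds);

  boolean delete(String src, boolean recursive) throws IOException {
    List<Long> collectedBlocks;
    writeLock();
    try {
      // 1. Collect the blocks to remove while holding the write lock.
      collectedBlocks = collectBlocks(src, recursive);
      // 2. Write the delete edit so it can be made durable.
      logDeleteEdit(src, recursive);
    } finally {
      writeUnlock();
    }
    // 3. Sync the edit log; once the edit is durable it is consistent to audit.
    syncEditLog();
    // 4. Audit log now, instead of after the (possibly long) block deletion.
    logAuditEvent(true, "delete", src);
    // 5. Incrementally delete the collected blocks outside the lock.
    incrementalBlockDelete(collectedBlocks);
    return true;
  }
}
{code}

With this ordering, a delete that later hangs the server during incremental block removal still shows up in the audit log immediately after the edit, which is what makes the hang easy to attribute.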
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/

[Oct 12, 2020 12:52:12 AM] (noreply) HDFS-15620. RBF: Fix test failures after HADOOP-17281 (#2375)

-1 overall

The following subsystems voted -1:
   findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
   cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
   (runtime bigger than 1h 0m 0s)
   unit

Specific tests:

   XML :

      Parsing Error(s):
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

   Failed junit tests :
      hadoop.crypto.key.kms.server.TestKMS
      hadoop.hdfs.TestFileChecksum
      hadoop.hdfs.TestFileChecksumCompositeCrc
      hadoop.hdfs.web.TestWebHDFS
      hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
      hadoop.hdfs.server.federation.router.TestRouterRpc
      hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
      hadoop.yarn.applications.distributedshell.TestDistributedShell
      hadoop.yarn.sls.TestSLSRunner
      hadoop.yarn.sls.appmaster.TestAMSimulator

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/diff-compile-cc-root.txt [48K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/diff-compile-javac-root.txt [568K]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/diff-checkstyle-root.txt [16M]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/pathlen.txt [12K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/diff-patch-pylint.txt [60K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/diff-patch-shelldocs.txt [44K]

   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/whitespace-eol.txt [13M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/whitespace-tabs.txt [1.9M]

   xml:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/xml.txt [24K]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/diff-javadoc-javadoc-root.txt [1.3M]

   findbugs:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2.txt [16K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [408K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [448K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [100K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [16K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/293/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [24K]

Powered by Apache Yetus 0.12.0   https://yetus.apache.org
This week's Hadoop Storage online meetup (& next one after)
Hi! Here's the reminder for this week's call. On Wednesday, Leon and Ekanth from Uber will talk about a new feature, HDFS-15547 (Dynamic disk-level tiering). Steve will also be talking about the new IOStatistics API on Oct 28. In parallel, Xiaomi's Jinglun will talk about the HDFS Federation Balancer Tools on Thursday at 1pm Beijing time; this talk will be in Mandarin. More details are on the wiki: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Storage+Online+Meetup
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/27/

[Oct 12, 2020 12:52:12 AM] (noreply) HDFS-15620. RBF: Fix test failures after HADOOP-17281 (#2375)

-1 overall

The following subsystems voted -1:
   blanks findbugs mvnsite pathlen shadedclient unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
   cc checkstyle javac javadoc pylint shellcheck shelldocs

The following subsystems are considered long running:
   (runtime bigger than 1h 0m 0s)
   unit

Specific tests:

   XML :

      Parsing Error(s):
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

   findbugs :

      module:hadoop-hdfs-project/hadoop-hdfs
         Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) Redundant null check at DataStorage.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) Redundant null check at DataStorage.java:[line 695]

      module:hadoop-hdfs-project
         Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) Redundant null check at DataStorage.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) Redundant null check at DataStorage.java:[line 695]

      module:hadoop-yarn-project/hadoop-yarn
         Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) Redundant null check at ResourceLocalizationService.java:is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) Redundant null check at ResourceLocalizationService.java:[line 343]
         Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) Redundant null check at ResourceLocalizationService.java:is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) Redundant null check at ResourceLocalizationService.java:[line 356]
         Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 333]

      module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server
         Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) Redundant null check at ResourceLocalizationService.java:is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) Redundant null check at ResourceLocalizationService.java:[line 343]
         Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationS
Wire compatibility between Hadoop 3.x client and 2.x server
Hi community, Does Hadoop 3.x provide wire compatibility between 3.x clients and 2.x servers? There is a blog post from Cloudera [1] mentioning wire compatibility between 2.x clients and 3.x servers, but not the other direction. I'm curious if someone knows. It would also be good to know if someone is running this setup in production. Thanks! Chao [1]: https://blog.cloudera.com/upgrading-clusters-workloads-hadoop-2-hadoop-3
Apache Hadoop qbt Report: trunk+JDK8 on Linux/aarch64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-linux-ARM-trunk/81/

[Oct 12, 2020 12:52:12 AM] (noreply) HDFS-15620. RBF: Fix test failures after HADOOP-17281 (#2375)
[Oct 12, 2020 12:39:15 PM] (Steve Loughran) HADOOP-17258. Magic S3Guard Committer to overwrite existing pendingSet file on task commit (#2371)

ERROR: File 'out/email-report.txt' does not exist
[jira] [Created] (HDFS-15628) HttpFS throws NPE if a file is a symlink
Ahmed Hussein created HDFS-15628:

   Summary: HttpFS throws NPE if a file is a symlink
   Key: HDFS-15628
   URL: https://issues.apache.org/jira/browse/HDFS-15628
   Project: Hadoop HDFS
   Issue Type: Bug
   Components: fs, httpfs
   Reporter: Ahmed Hussein
   Assignee: Ahmed Hussein

If a directory containing a symlink is listed, the client ({{WebHdfsFileSystem}}) blows up with an NPE. If {{type}} is {{SYMLINK}}, there must be a {{symlink}} field whose value is the link target string. HttpFS returns a response without the {{symlink}} field. {{WebHdfsFileSystem}} assumes it is there for a symlink and blindly tries to parse it, causing an NPE. This is not an issue if the destination cluster does not have symlinks enabled.
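For illustration only, here is a hedged sketch of the defensive parse described above: check for the {{symlink}} field instead of assuming it is present when {{type}} is {{SYMLINK}}. The class and helper name (SymlinkParseSketch, symlinkTarget) are hypothetical, not the actual WebHdfsFileSystem/JsonUtilClient code.

{code:java}
import java.util.Map;
import org.apache.hadoop.fs.Path;

/**
 * Hypothetical sketch of tolerant parsing of a WebHDFS FileStatus JSON map.
 * Not the real client code; it only shows the null check that avoids the NPE.
 */
class SymlinkParseSketch {
  /** Returns the symlink target as a Path, or null if the field is absent. */
  static Path symlinkTarget(Map<String, Object> fileStatusJson) {
    Object type = fileStatusJson.get("type");
    if (!"SYMLINK".equals(type)) {
      return null; // not a symlink, nothing to parse
    }
    // HttpFS may omit the "symlink" field even when type == SYMLINK, so check
    // for null instead of parsing it blindly (the blind parse is what NPEs).
    Object target = fileStatusJson.get("symlink");
    return target == null ? null : new Path((String) target);
  }
}
{code}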
Re: Wire compatibility between Hadoop 3.x client and 2.x server
In our case, when the NameNode had been upgraded to 3.1.3 and the DataNodes were still on 2.6, we found that when Hive called the getContentSummary method, the client and server were not compatible because Hadoop 3 added the new PROVIDED storage type.

On Oct 13, 2020, at 06:41, Chao Sun <sunc...@apache.org> wrote:
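As a rough illustration of how the scenario above can be caught early, here is a minimal smoke-check sketch for a mixed-version rollout: exercise getContentSummary through the client version that will actually be used and surface any wire errors before production traffic hits them. The class name, path, and error handling are illustrative assumptions, not an established validation tool.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Hypothetical pre-upgrade smoke check: call getContentSummary against the
 * cluster under test and report any failure, which in a mixed 2.x/3.x
 * deployment can indicate a protocol mismatch such as an unrecognized
 * storage type.
 */
public class ContentSummarySmokeCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      // A call that exercises quota and storage-type fields on the wire.
      ContentSummary summary = fs.getContentSummary(new Path("/"));
      System.out.println("length=" + summary.getLength()
          + " fileCount=" + summary.getFileCount());
    } catch (Exception e) {
      System.err.println("getContentSummary failed: " + e);
    }
  }
}
{code}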