[jira] [Created] (HDFS-16135) Improve the display of the color of the storage space used by the DataNode
JiangHua Zhu created HDFS-16135:
-----------------------------------

             Summary: Improve the display of the color of the storage space used by the DataNode
                 Key: HDFS-16135
                 URL: https://issues.apache.org/jira/browse/HDFS-16135
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: JiangHua Zhu
         Attachments: image-2021-07-22-15-31-55-467.png

In our cluster, we found that the storage usage color displayed for some DataNodes is inconsistent with the storage space those nodes actually use.

!image-2021-07-22-15-31-55-467.png!

We should display the appropriate color. In our case, this happens when the cluster runs in federation mode.
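For context on what a fix would touch: the usage color shown per DataNode is derived from a per-node used/capacity ratio. Below is a minimal Java sketch of that kind of mapping; the class name, thresholds, and colors are hypothetical illustrations, not the actual Hadoop UI code, but they show why the color must be computed from the node's own figures rather than a cluster-wide total, which a federated deployment can confuse.

{code:java}
/**
 * Hypothetical sketch: mapping a DataNode's used-space ratio to a display
 * color. Names and thresholds are illustrative, not Hadoop's actual UI code.
 */
public class StorageColorSketch {

  /** Pick a color bucket from the used/capacity ratio of a single node. */
  static String usageColor(long dfsUsedBytes, long capacityBytes) {
    if (capacityBytes <= 0) {
      return "gray"; // unknown capacity, e.g. node not reporting yet
    }
    double usedPct = 100.0 * dfsUsedBytes / capacityBytes;
    if (usedPct >= 90.0) {
      return "red";    // nearly full
    } else if (usedPct >= 75.0) {
      return "orange"; // filling up
    }
    return "green";    // healthy
  }

  public static void main(String[] args) {
    // The bug class HDFS-16135 describes: computing usedPct against the
    // wrong capacity (e.g. a cluster-wide figure in a federated cluster)
    // would change the bucket and show the wrong color.
    System.out.println(usageColor(95L << 30, 100L << 30)); // red
    System.out.println(usageColor(10L << 30, 100L << 30)); // green
  }
}
{code}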
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/

[Jul 21, 2021 1:26:24 AM] (Konstantin Shvachko) HADOOP-17028. ViewFS should initialize mounted target filesystems lazily. Contributed by Abhishek Das (#2260, #3218)
[Jul 21, 2021 7:35:45 AM] (821684824) YARN-10860. Make max container per heartbeat configs refreshable. Contributed by Eric Badger.

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:

        hadoop.fs.TestFileUtil
        hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
        hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
        hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
        hadoop.hdfs.server.federation.router.TestRouterQuota
        hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
        hadoop.resourceestimator.service.TestResourceEstimatorService
        hadoop.resourceestimator.solver.impl.TestLpSolver
        hadoop.yarn.sls.TestSLSRunner
        hadoop.yarn.server.resourcemanager.TestClientRMService
        hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker

    cc:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/diff-compile-cc-root.txt [4.0K]

    javac:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/diff-compile-javac-root.txt [496K]

    checkstyle:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/diff-checkstyle-root.txt [16M]

    hadolint:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/diff-patch-hadolint.txt [4.0K]

    mvnsite:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/patch-mvnsite-root.txt [608K]

    pathlen:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/pathlen.txt [12K]

    pylint:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/diff-patch-pylint.txt [48K]

    shellcheck:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/diff-patch-shellcheck.txt [56K]

    shelldocs:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/diff-patch-shelldocs.txt [48K]

    whitespace:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/whitespace-eol.txt [12M]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/whitespace-tabs.txt [1.3M]

    javadoc:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/patch-javadoc-root.txt [64K]

    unit:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [232K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [424K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [40K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [96K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [104K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt [16K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/367/artifact/out/patch-unit-
[jira] [Created] (HDFS-16136) Handle all occurrences of InvalidEncryptionKeyException
Wei-Chiu Chuang created HDFS-16136:
--------------------------------------

             Summary: Handle all occurrences of InvalidEncryptionKeyException
                 Key: HDFS-16136
                 URL: https://issues.apache.org/jira/browse/HDFS-16136
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 3.1.1
            Reporter: Wei-Chiu Chuang

After HDFS-10609 and HDFS-11741, we still observe InvalidEncryptionKeyException errors that are not retried.

{noformat}
2021-07-12 11:10:58,795 ERROR datanode.DataNode (DataXceiver.java:writeBlock(863)) - DataNode{data=FSDataset{dirpath='[/grid/01/hadoop/hdfs/data, /grid/02/hadoop/hdfs/data, /grid/03/hadoop/hdfs/data, /grid/04/hadoop/hdfs/data, /grid/05/hadoop/hdfs/data, /grid/06/hadoop/hdfs/data, /grid/07/hadoop/hdfs/data, /grid/08/hadoop/hdfs/data, /grid/09/hadoop/hdfs/data, /grid/10/hadoop/hdfs/data, /grid/11/hadoop/hdfs/data, /grid/12/hadoop/hdfs/data, /grid/13/hadoop/hdfs/data, /grid/14/hadoop/hdfs/data, /grid/15/hadoop/hdfs/data, /grid/16/hadoop/hdfs/data, /grid/17/hadoop/hdfs/data, /grid/18/hadoop/hdfs/data, /grid/19/hadoop/hdfs/data, /grid/20/hadoop/hdfs/data, /grid/21/hadoop/hdfs/data, /grid/22/hadoop/hdfs/data]'}, localName='lxdmelcly-lxw01-p01-whw10289.oan:10019', datanodeUuid='70403b64-cb39-4b4a-ac6c-787ce7bdbe2c', xmitsInProgress=0}:Exception transfering block BP-1743446178-172.18.16.38-1537373339905:blk_2196991498_1131235321 to mirror 172.18.16.33:10019
org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: Can't re-compute encryption key for nonce, since the required block key (keyID=-213389155) doesn't exist. Current key: 1804780309
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiatedCipherOption(DataTransferSaslUtil.java:419)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:479)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:303)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:245)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:215)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:800)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:745)
2021-07-12 11:10:58,796 ERROR datanode.DataNode (DataXceiver.java:run(321)) - xxx:10019:DataXceiver error processing WRITE_BLOCK operation src: /172.18.16.8:41992 dst: /172.18.16.20:10019
org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: Can't re-compute encryption key for nonce, since the required block key (keyID=-213389155) doesn't exist. Current key: 1804780309
{noformat}

We should handle this exception wherever SaslDataTransferClient.socketSend() is invoked:
* DataXceiver.writeBlock()
* BlockDispatcher.moveBlock()
* DataNode.run()
* DataXceiver.replaceBlock()
* StripedBlockWriter.init()

This issue isn't obvious, because the existing HDFS fault tolerance mechanisms should mask the data encryption key error.
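For illustration, the handling that HDFS-10609/HDFS-11741 added at some call sites amounts to: catch InvalidEncryptionKeyException, invalidate the cached data encryption key, and retry with a fresh key. The Java sketch below renders that idea in simplified form; KeySource, SaslSend, and the single-retry policy are assumptions for the example, not the real SaslDataTransferClient API.

{code:java}
import java.io.IOException;

/**
 * Hypothetical sketch of the retry pattern this issue asks for: on
 * InvalidEncryptionKeyException, invalidate the cached data encryption key
 * and retry once with a freshly fetched key. The types below are stand-ins
 * for the real SaslDataTransferClient plumbing.
 */
public class EncryptionKeyRetrySketch {

  /** Stand-in for the exception thrown when the peer no longer knows our cached block key. */
  static class InvalidEncryptionKeyException extends IOException {
    InvalidEncryptionKeyException(String msg) { super(msg); }
  }

  /** Stand-in for the component that caches and fetches encryption keys. */
  interface KeySource {
    String currentKey();
    void invalidateAndRefetch() throws IOException; // assumption: forces a new key
  }

  /** Stand-in for the operation that performs the SASL handshake, e.g. socketSend(). */
  interface SaslSend {
    void send(String key) throws IOException;
  }

  /**
   * Each call site listed in the issue (writeBlock, moveBlock, replaceBlock,
   * ...) could funnel through a helper like this instead of calling send()
   * directly, so the retry is applied uniformly.
   */
  static void sendWithRetry(KeySource keys, SaslSend op) throws IOException {
    try {
      op.send(keys.currentKey());
    } catch (InvalidEncryptionKeyException e) {
      // The cached key expired on the peer; refresh and retry exactly once
      // so a genuinely broken key still fails fast.
      keys.invalidateAndRefetch();
      op.send(keys.currentKey());
    }
  }
}
{code}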
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/

[Jul 21, 2021 2:06:46 AM] (noreply) YARN-10630. [UI2] Ambiguous queue name resolution (#3214)
[Jul 21, 2021 7:31:44 AM] (821684824) YARN-10860. Make max container per heartbeat configs refreshable. Contributed by Eric Badger.

-1 overall

The following subsystems voted -1:
    blanks pathlen spotbugs unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML:

        Parsing Error(s):
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs:

        module:hadoop-tools/hadoop-azure
            Inconsistent synchronization of org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.in; locked 81% of time. Unsynchronized access at NativeAzureFileSystem.java:[line 938]

        module:hadoop-tools
            Inconsistent synchronization of org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.in; locked 81% of time. Unsynchronized access at NativeAzureFileSystem.java:[line 938]

        module:root
            Inconsistent synchronization of org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.in; locked 81% of time. Unsynchronized access at NativeAzureFileSystem.java:[line 938]

        (A sketch of what this synchronization warning flags follows at the end of this report.)

    Failed junit tests:

        hadoop.hdfs.server.namenode.TestDecommissioningStatus
        hadoop.hdfs.TestViewDistributedFileSystemContract
        hadoop.hdfs.TestSnapshotCommands
        hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
        hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS
        hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor
        hadoop.hdfs.server.namenode.ha.TestEditLogTailer
        hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand
        hadoop.hdfs.TestHDFSFileSystemContract
        hadoop.hdfs.web.TestWebHdfsFileSystemContract
        hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure
        hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor
        hadoop.tools.dynamometer.TestDynamometerInfra
        hadoop.tools.dynamometer.TestDynamometerInfra

    cc:
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artifact/out/results-compile-cc-root.txt [96K]

    javac:
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artifact/out/results-compile-javac-root.txt [364K]

    blanks:
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artifact/out/blanks-eol.txt [13M]
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artifact/out/blanks-tabs.txt [2.0M]

    checkstyle:
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artifact/out/results-checkstyle-root.txt [16M]

    pathlen:
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artifact/out/results-pathlen.txt [16K]

    pylint:
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artifact/out/results-pylint.txt [20K]

    shellcheck:
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artifact/out/results-shellcheck.txt [28K]

    xml:
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artifact/out/xml.txt [24K]

    javadoc:
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artifact/out/results-javadoc-javadoc-root.txt [408K]

    spotbugs:
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artifact/out/branch-spotbugs-hadoop-tools_hadoop-azure-warnings.html [8.0K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/576/artif
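For readers unfamiliar with the SpotBugs finding above: "inconsistent synchronization" means a field is accessed under a lock on most code paths but without it on at least one. The minimal, hypothetical Java sketch below shows the flagged pattern and its fix; it is an illustration, not the actual hadoop-azure NativeAzureFsInputStream code.

{code:java}
import java.io.IOException;
import java.io.InputStream;

/**
 * Illustration of the SpotBugs "Inconsistent synchronization" warning:
 * a field written under this object's monitor in one method but read
 * without it in another. Not the real hadoop-azure code.
 */
public class InconsistentSyncSketch {
  private InputStream in; // intended to be guarded by `this`

  public synchronized void reopen(InputStream fresh) {
    in = fresh; // locked write
  }

  public int read() throws IOException {
    return in.read(); // BUG: unlocked read; this is the access SpotBugs flags
  }

  public synchronized int readSafely() throws IOException {
    return in.read(); // fix: take the same lock on every access to `in`
  }
}
{code}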