Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1455/

[Mar 30, 2020 7:29:21 PM] (inigoiri) HDFS-15196. RBF: RouterRpcServer getListing cannot list large dirs

-1 overall

The following subsystems voted -1:
    asflicense compile findbugs mvninstall mvnsite pathlen shadedclient unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    Failed junit tests :
       hadoop.metrics2.source.TestJvmMetrics
       hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
       hadoop.hdfs.TestAclsEndToEnd
       hadoop.hdfs.TestRead
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
       hadoop.hdfs.server.namenode.TestProcessCorruptBlocks
       hadoop.hdfs.server.mover.TestStorageMover
       hadoop.hdfs.server.namenode.TestRefreshBlockPlacementPolicy
       hadoop.hdfs.server.blockmanagement.TestPendingReconstruction
       hadoop.hdfs.TestDecommissionWithStriped
       hadoop.hdfs.server.namenode.snapshot.TestSnapshot
       hadoop.hdfs.TestSetrepIncreasing
       hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy
       hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling
       hadoop.hdfs.TestFileChecksumCompositeCrc
       hadoop.hdfs.TestDFSStorageStateRecovery
       hadoop.hdfs.server.balancer.TestBalancer
       hadoop.hdfs.TestLeaseRecovery2
       hadoop.hdfs.TestCrcCorruption
       hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
       hadoop.hdfs.TestErasureCodingPolicyWithSnapshot
       hadoop.hdfs.TestDFSClientFailover
       hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy
       hadoop.hdfs.server.namenode.TestStorageRestore
       hadoop.hdfs.TestFileCreationEmpty
       hadoop.hdfs.tools.TestECAdmin
       hadoop.hdfs.TestFSOutputSummer
       hadoop.hdfs.TestQuotaAllowOwner
       hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
       hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
       hadoop.hdfs.tools.TestViewFSStoragePolicyCommands
       hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot
       hadoop.hdfs.TestReconstructStripedFile
       hadoop.hdfs.TestFileAppend
       hadoop.hdfs.server.namenode.TestFileTruncate
       hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
       hadoop.hdfs.server.namenode.TestAddStripedBlocks
       hadoop.hdfs.server.datanode.TestDataNodeMetrics
       hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
       hadoop.hdfs.TestDecommissionWithBackoffMonitor
       hadoop.hdfs.server.namenode.TestFSImage
       hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
       hadoop.hdfs.TestMultiThreadedHflush
       hadoop.hdfs.TestFileCorruption
       hadoop.hdfs.server.namenode.TestINodeAttributeProvider
       hadoop.hdfs.TestFileAppend4
       hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd
       hadoop.hdfs.server.namenode.TestStripedINodeFile
       hadoop.hdfs.server.namenode.TestFileContextXAttr
       hadoop.hdfs.TestFileCreationDelete
       hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
       hadoop.hdfs.TestErasureCodingExerciseAPIs
       hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile
       hadoop.hdfs.server.namenode.snapshot.TestGetContentSummaryWithSnapshot
       hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
       hadoop.hdfs.TestReadStripedFileWithMissingBlocks
       hadoop.hdfs.server.namenode.TestFSDirectory
       hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData
       hadoop.hdfs.TestReadStripedFileWithDNFailure
       hadoop.hdfs.server.datanode.TestBatchIbr
       hadoop.hdfs.TestClientProtocolForPipelineRecovery
       hadoop.hdfs.TestReplication
       hadoop.hdfs.tools.TestDFSAdmin
       hadoop.hdfs.server.namenode.TestFsck
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
       hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
       Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Failed junit tests :
       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.registry.secure.TestSecureLogins
       hadoop.yarn.client.api.impl.TestAMRMProxy

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt  [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt  [324K]

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt  [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt  [304K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/diff-checkstyle-root.txt  [16M]

   hadolint:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/diff-patch-hadolint.txt  [4.0K]

   pathlen:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/pathlen.txt  [12K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/diff-patch-pylint.txt  [24K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/diff-patch-shellcheck.txt  [56K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/diff-patch-shelldocs.txt  [8.0K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/whitespace-eol.txt  [12M]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/whitespace-tabs.txt  [1.3M]

   xml:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/xml.txt  [12K]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html  [8.0K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/branch-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt  [0]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt  [16K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt  [1.1M]

   unit:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [232K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt  [12K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/641/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt  [12K]
       https://b
[jira] [Created] (HDFS-15252) HttpFS : setWorkingDirectory should not accept invalid paths
hemanthboyina created HDFS-15252:
------------------------------------

             Summary: HttpFS : setWorkingDirectory should not accept invalid paths
                 Key: HDFS-15252
                 URL: https://issues.apache.org/jira/browse/HDFS-15252
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: hemanthboyina
            Assignee: hemanthboyina
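The notice does not spell out the fix, but the intended behavior is clear from the summary: setWorkingDirectory should reject malformed paths instead of silently storing them. The sketch below is a hypothetical illustration only (the class name, method names, and exact validation rules are assumptions, not the actual HttpFS patch); it is loosely modeled on the isValidName-style path checks used elsewhere in HDFS.

import org.apache.hadoop.fs.Path;

// Hypothetical sketch for HDFS-15252 (not the actual patch):
// validate a candidate working directory before accepting it.
public class WorkingDirectoryCheck {

  // Reject relative paths, and components that are ".", "..",
  // or contain a ":".
  static boolean isValidName(String src) {
    if (!src.startsWith(Path.SEPARATOR)) {
      return false;
    }
    for (String element : src.split(Path.SEPARATOR)) {
      if (element.equals(".") || element.equals("..")
          || element.contains(":")) {
        return false;
      }
    }
    return true;
  }

  // How a setWorkingDirectory implementation could use the check.
  static void setWorkingDirectory(Path newDir) {
    if (!isValidName(newDir.toUri().getPath())) {
      throw new IllegalArgumentException(
          "Invalid directory name: " + newDir);
    }
    // ... store newDir as the current working directory ...
  }

  public static void main(String[] args) {
    System.out.println(isValidName("/user/test"));    // true
    System.out.println(isValidName("/user/../test")); // false: ".."
    System.out.println(isValidName("user/test"));     // false: relative
  }
}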
[jira] [Created] (HDFS-15253) Set default throttle value on dfs.image.transfer.bandwidthPerSec
Karthik Palanisamy created HDFS-15253:
-----------------------------------------

             Summary: Set default throttle value on dfs.image.transfer.bandwidthPerSec
                 Key: HDFS-15253
                 URL: https://issues.apache.org/jira/browse/HDFS-15253
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
            Reporter: Karthik Palanisamy
            Assignee: Karthik Palanisamy

The default value of dfs.image.transfer.bandwidthPerSec is 0, which lets fsimage transfers during checkpointing use all available bandwidth. I think we should throttle this: many users have experienced NameNode failover when transferring a large fsimage (e.g. >25 GB), together with the fsimage replication across dfs.namenode.name.dir.

Suggested values:
dfs.image.transfer.bandwidthPerSec=52428800 (50 MB/s)
dfs.namenode.checkpoint.txns=200 (the default is 1M, which is good for avoiding frequent checkpoints; note the time-based checkpoint still runs once every 6 hours)
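For illustration, here is how those two properties could be set programmatically against a Hadoop Configuration. The property keys and values are taken verbatim from the report above; the class itself is only a sketch (not part of Hadoop), and a cluster-side change would normally go in hdfs-site.xml instead.

import org.apache.hadoop.conf.Configuration;

// Illustrative sketch: applying the throttle values proposed in
// HDFS-15253. The keys are real HDFS property names; the class is
// hypothetical.
public class CheckpointThrottleExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Default is 0 (unthrottled); 52428800 bytes/s = 50 * 1024 * 1024,
    // i.e. the 50 MB/s cap suggested in the report.
    conf.setLong("dfs.image.transfer.bandwidthPerSec", 52428800L);

    // Transaction-count checkpoint trigger, as quoted in the report
    // (the time-based trigger still fires every 6 hours regardless).
    conf.setLong("dfs.namenode.checkpoint.txns", 200L);

    System.out.println("dfs.image.transfer.bandwidthPerSec = "
        + conf.getLong("dfs.image.transfer.bandwidthPerSec", 0L));
  }
}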