Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/

[Jul 4, 2016 4:11:56 AM] (aajisaka) HDFS-10572. Fix TestOfflineEditsViewer#testGenerated. Contributed by

-1 overall

The following subsystems voted -1:
    unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:
        hadoop.ha.TestZKFailoverController
        hadoop.fs.viewfs.TestViewFileSystemHdfs
        hadoop.hdfs.server.namenode.TestEditLog
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.resourcemanager.TestRMAdminService
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.client.cli.TestLogsCLI
        hadoop.yarn.client.api.impl.TestYarnClient

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/diff-compile-cc-root.txt [4.0K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/diff-compile-javac-root.txt [168K]

    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/diff-checkstyle-root.txt [16M]

    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/diff-patch-pylint.txt [16K]

    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/diff-patch-shellcheck.txt [20K]

    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/diff-patch-shelldocs.txt [16K]

    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/whitespace-eol.txt [12M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/whitespace-tabs.txt [1.3M]

    javadoc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/diff-javadoc-javadoc-root.txt [2.3M]

    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [120K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [148K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [56K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [268K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt [16K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/84/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [124K]

Powered by Apache Yetus 0.4.0-SNAPSHOT
http://yetus.apache.org
[jira] [Created] (HADOOP-13339) MetricsSourceAdapter#updateAttrCache may throw NPE due to NULL lastRecs
Yongjun Zhang created HADOOP-13339:
--------------------------------------

             Summary: MetricsSourceAdapter#updateAttrCache may throw NPE due to NULL lastRecs
                 Key: HADOOP-13339
                 URL: https://issues.apache.org/jira/browse/HADOOP-13339
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Yongjun Zhang
            Assignee: Yongjun Zhang

The for loop below may find lastRecs null and throw an NPE:

{code}
  private int updateAttrCache() {
    LOG.debug("Updating attr cache...");
    int recNo = 0;
    int numMetrics = 0;
    for (MetricsRecordImpl record : lastRecs) {
      for (MetricsTag t : record.tags()) {
        setAttrCacheTag(t, recNo);
        ++numMetrics;
      }
      for (AbstractMetric m : record.metrics()) {
        setAttrCacheMetric(m, recNo);
        ++numMetrics;
      }
      ++recNo;
    }
    LOG.debug("Done. # tags & metrics=" + numMetrics);
    return numMetrics;
  }
{code}
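One defensive way to avoid the NPE would be to return early when no records have been snapshotted yet. The following is only an illustrative sketch of such a guard around the method quoted above, not an actual patch (the issue was ultimately closed as a duplicate of HADOOP-11361):

{code}
  private int updateAttrCache() {
    LOG.debug("Updating attr cache...");
    if (lastRecs == null) {
      // Sketch assumption: lastRecs can legitimately be null before the first
      // metrics snapshot, so there is nothing to cache yet.
      return 0;
    }
    int recNo = 0;
    int numMetrics = 0;
    for (MetricsRecordImpl record : lastRecs) {
      for (MetricsTag t : record.tags()) {
        setAttrCacheTag(t, recNo);
        ++numMetrics;
      }
      for (AbstractMetric m : record.metrics()) {
        setAttrCacheMetric(m, recNo);
        ++numMetrics;
      }
      ++recNo;
    }
    LOG.debug("Done. # tags & metrics=" + numMetrics);
    return numMetrics;
  }
{code}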
[jira] [Resolved] (HADOOP-13339) MetricsSourceAdapter#updateAttrCache may throw NPE due to NULL lastRecs
     [ https://issues.apache.org/jira/browse/HADOOP-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsuyoshi Ozawa resolved HADOOP-13339.
-------------------------------------
    Resolution: Duplicate

Dup of HADOOP-11361. Closing this.

> MetricsSourceAdapter#updateAttrCache may throw NPE due to NULL lastRecs
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-13339
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13339
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
[jira] [Created] (HADOOP-13340) Compress Hadoop Archive output
Duc Le Tu created HADOOP-13340:
----------------------------------

             Summary: Compress Hadoop Archive output
                 Key: HADOOP-13340
                 URL: https://issues.apache.org/jira/browse/HADOOP-13340
             Project: Hadoop Common
          Issue Type: New Feature
          Components: tools
    Affects Versions: 2.5.0
            Reporter: Duc Le Tu

Why can't the Hadoop Archive tool compress its output the way other map-reduce jobs can? I used options such as:
  -D mapreduce.output.fileoutputformat.compress=true
  -D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
but they did not work. Did I do something wrong? If not, please add an option to compress the output of the Hadoop Archive tool; it would be very useful for data retention (it addresses the small-files problem while also compressing the data).
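For comparison, a regular MapReduce job driver can enable the equivalent output compression through the standard FileOutputFormat helpers; a minimal sketch is below. The class and method names (CompressedOutputJobSketch, buildJob) are made up for illustration, and this shows only the generic MapReduce mechanism, not behavior of the Hadoop Archive tool itself, which per the report does not honor these settings:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CompressedOutputJobSketch {
  public static Job buildJob() throws Exception {
    Configuration conf = new Configuration();
    // Programmatic equivalent of the -D properties from the report:
    //   mapreduce.output.fileoutputformat.compress=true
    //   mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
    Job job = Job.getInstance(conf, "compressed-output-example");
    FileOutputFormat.setCompressOutput(job, true);
    FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
    return job;
  }
}
{code}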
[jira] [Created] (HADOOP-13341) Deprecate HADOOP_SERVERNAME_OPT; replace with HADOOP_(command)_OPT
Allen Wittenauer created HADOOP-13341:
-----------------------------------------

             Summary: Deprecate HADOOP_SERVERNAME_OPT; replace with HADOOP_(command)_OPT
                 Key: HADOOP-13341
                 URL: https://issues.apache.org/jira/browse/HADOOP-13341
             Project: Hadoop Common
          Issue Type: Improvement
          Components: scripts
    Affects Versions: 3.0.0-alpha1
            Reporter: Allen Wittenauer

Big features like YARN-2928 demonstrate that even senior-level Hadoop developers forget that daemons need a custom _OPT env var. We can replace all of the custom vars with generic handling, just as we do for the username check.

For example, today's HADOOP_NAMENODE_OPT would become HADOOP_namenode_OPT. But if I want custom distcp options, there is no equivalent today; with a generic command-based replacement mode in place, HADOOP_distcp_OPT would automatically work.