Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    FindBugs :

       module:hadoop-hdfs-project/hadoop-hdfs
       org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File, long) may fail to clean up java.io.InputStream on checked exception. Obligation to clean up resource created at BlockPoolSlice.java:[line 720] is not discharged.

    Failed junit tests :

       hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
       hadoop.yarn.server.TestDiskFailures
       hadoop.yarn.server.TestContainerManagerSecurity
       hadoop.yarn.server.TestMiniYarnClusterNodeUtilization

   cc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/diff-compile-javac-root.txt  [168K]

   checkstyle:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/diff-checkstyle-root.txt  [16M]

   pylint:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/diff-patch-pylint.txt  [20K]

   shellcheck:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/diff-patch-shellcheck.txt  [28K]

   shelldocs:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/diff-patch-shelldocs.txt  [16K]

   whitespace:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/whitespace-eol.txt  [11M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/whitespace-tabs.txt  [1.3M]

   findbugs:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html  [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt  [4.0K]

   javadoc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/diff-javadoc-javadoc-root.txt  [2.2M]

   unit:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt  [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt  [316K]

   asflicense:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/245/artifact/out/patch-asflicense-problems.txt  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org
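Editor's note: the FindBugs entry in the report above flags a potential java.io.InputStream leak when a checked exception is thrown. Below is a minimal illustrative sketch of the pattern that normally discharges this kind of OBL obligation, opening the stream in try-with-resources so it is closed on every exit path. The class, method body, and names are hypothetical and are not the actual BlockPoolSlice code.

{code:java}
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class ChecksumValidationSketch {

  /**
   * Illustrative only: reads up to 'length' bytes of a block file while
   * guaranteeing the underlying streams are closed on every path, including
   * when a checked exception is thrown -- the shape FindBugs expects here.
   */
  static long validateLength(File blockFile, long length) throws IOException {
    // try-with-resources closes both streams even if read() throws IOException
    try (FileInputStream fis = new FileInputStream(blockFile);
         DataInputStream in = new DataInputStream(fis)) {
      byte[] buffer = new byte[8192];
      long remaining = length;
      while (remaining > 0) {
        int toRead = (int) Math.min(buffer.length, remaining);
        int read = in.read(buffer, 0, toRead);
        if (read < 0) {
          break; // file shorter than the expected length
        }
        remaining -= read;
      }
      return length - remaining; // bytes actually validated
    }
  }
}
{code}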
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/

No changes

-1 overall

The following subsystems voted -1:
    compile unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests :

       hadoop.hdfs.TestBlockStoragePolicy
       hadoop.hdfs.TestHdfsAdmin
       hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
       hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
       hadoop.hdfs.web.TestWebHdfsTimeouts
       hadoop.hdfs.server.datanode.TestDirectoryScanner
       hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
       hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
       hadoop.yarn.server.timeline.TestRollingLevelDB
       hadoop.yarn.server.timeline.TestTimelineDataManager
       hadoop.yarn.server.timeline.TestLeveldbTimelineStore
       hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
       hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
       hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
       hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
       hadoop.yarn.server.timelineservice.storage.common.TestRowKeys
       hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters
       hadoop.yarn.server.timelineservice.storage.common.TestSeparator
       hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
       hadoop.yarn.server.resourcemanager.TestRMRestart
       hadoop.yarn.server.resourcemanager.TestResourceTrackerService
       hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
       hadoop.yarn.server.TestContainerManagerSecurity
       hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
       hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
       hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
       hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
       hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
       hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
       hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
       hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
       hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
       hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
       hadoop.yarn.applications.distributedshell.TestDistributedShell
       hadoop.mapred.TestShuffleHandler
       hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService

    Timed out junit tests :

       org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache

   compile:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/artifact/out/patch-compile-root.txt  [172K]

   cc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/artifact/out/patch-compile-root.txt  [172K]

   javac:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/artifact/out/patch-compile-root.txt  [172K]

   unit:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [268K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [52K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt  [52K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt  [20K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt  [92K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt  [316K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt  [28K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/176/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt  [20K]
       https://bu
[jira] [Created] (HDFS-11202) httpfs.sh will not run when temp dir does not exist
John Zhuge created HDFS-11202:
----------------------------------

             Summary: httpfs.sh will not run when temp dir does not exist
                 Key: HDFS-11202
                 URL: https://issues.apache.org/jira/browse/HDFS-11202
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: httpfs
            Reporter: John Zhuge
            Assignee: John Zhuge
            Priority: Minor

From {{httpfs-localhost.2016-12-04.log}}:

{noformat}
INFO: ERROR: S01: Dir [/Users/jzhuge/hadoop2/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/temp] does not exist
Dec 04, 2016 7:04:46 PM org.apache.catalina.core.StandardContext listenerStart
SEVERE: Exception sending context initialized event to listener instance of class org.apache.hadoop.fs.http.server.HttpFSServerWebApp
java.lang.RuntimeException: org.apache.hadoop.lib.server.ServerException: S01: Dir [/Users/jzhuge/hadoop2/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/temp] does not exist
	at org.apache.hadoop.lib.servlet.ServerWebApp.contextInitialized(ServerWebApp.java:161)
	at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
	at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
	at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
	at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
	at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
	at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080)
	at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
	at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507)
	at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
	at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
	at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
	at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
	at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
	at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
	at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
	at org.apache.catalina.core.StandardService.start(StandardService.java:525)
	at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
	at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
	at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: org.apache.hadoop.lib.server.ServerException: S01: Dir [/Users/jzhuge/hadoop2/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/temp] does not exist
	at org.apache.hadoop.lib.server.Server.verifyDir(Server.java:400)
	at org.apache.hadoop.lib.server.Server.init(Server.java:349)
	at org.apache.hadoop.fs.http.server.HttpFSServerWebApp.init(HttpFSServerWebApp.java:100)
	at org.apache.hadoop.lib.servlet.ServerWebApp.contextInitialized(ServerWebApp.java:158)
	... 24 more
{noformat}

After creating the temp dir manually, httpfs.sh works.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
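Editor's note: the "Caused by" frames above show org.apache.hadoop.lib.server.Server.verifyDir failing hard when the HttpFS temp directory is missing. A minimal sketch of the general workaround described in the last sentence of the report, i.e. ensuring the directory exists before the server starts, is below. The class name and path handling are hypothetical and this is not the actual HttpFS fix.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class EnsureTempDir {
  public static void main(String[] args) throws IOException {
    // Hypothetical path; in the report it is the "temp" directory under the dist target.
    Path tempDir = Paths.get(args.length > 0 ? args[0] : "temp");

    // createDirectories is a no-op if the directory already exists,
    // so it is safe to call unconditionally before startup.
    Files.createDirectories(tempDir);
    System.out.println("temp dir ready: " + tempDir.toAbsolutePath());
  }
}
{code}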
[jira] [Created] (HDFS-11203) Rename handling during re-encrypt EDEK
Xiao Chen created HDFS-11203:
---------------------------------

             Summary: Rename handling during re-encrypt EDEK
                 Key: HDFS-11203
                 URL: https://issues.apache.org/jira/browse/HDFS-11203
             Project: Hadoop HDFS
          Issue Type: Task
          Components: encryption
            Reporter: Xiao Chen
            Assignee: Xiao Chen

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HDFS-11204) The usage of formatting ZK command should be updated
Yiqun Lin created HDFS-11204:
---------------------------------

             Summary: The usage of formatting ZK command should be updated
                 Key: HDFS-11204
                 URL: https://issues.apache.org/jira/browse/HDFS-11204
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: documentation
    Affects Versions: 3.0.0-alpha2
            Reporter: Yiqun Lin
            Assignee: Yiqun Lin
            Priority: Minor

Currently the usage of the command {{hdfs zkfc -formatZK}} is not well documented. The {{-force}} and {{-nonInteractive}} parameters are missing from both the documentation and the help message, even though they behave much like the corresponding options of namenode format. We should document them so that users can easily learn how to use this command.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
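Editor's note: for context, a sketch of how the flags named above are typically invoked. The semantics assumed here mirror {{hdfs namenode -format}}, as the description suggests; the exact behavior and help text would still need to be confirmed against the current DFSZKFailoverController.

{noformat}
hdfs zkfc -formatZK                   # prompt before overwriting an existing ZKFC znode
hdfs zkfc -formatZK -force            # overwrite an existing znode without prompting
hdfs zkfc -formatZK -nonInteractive   # exit with an error instead of prompting
{noformat}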