Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/

[Feb 17, 2018 4:28:08 AM] (wangda) YARN-7328. ResourceUtils allows
[Feb 17, 2018 4:28:55 AM] (wangda) HADOOP-14875. Create end user documentation from the compatibility
[Feb 17, 2018 11:24:55 AM] (arun suresh) YARN-7918. Fix TestAMRMClientPlacementConstraints. (Gergely Novák via
[Feb 17, 2018 3:00:28 PM] (rohithsharmaks) YARN-7919. Refactor timelineservice-hbase module into submodules.
[Feb 18, 2018 8:31:23 AM] (rohithsharmaks) YARN-7937. Fix http method name in Cluster Application Timeout Update
[Feb 18, 2018 1:19:39 PM] (aajisaka) HADOOP-15223. Replace Collections.EMPTY* with empty* when available

-1 overall

The following subsystems voted -1:
    findbugs unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    FindBugs :

        module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
        org.apache.hadoop.yarn.api.records.Resource.getResources() may expose internal representation by returning Resource.resources At Resource.java:[line 234]

    Failed junit tests :

        hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
        hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
        hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
        hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
        hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
        hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy
        hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
        hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/diff-compile-cc-root.txt [4.0K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/diff-compile-javac-root.txt [280K]

    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/diff-checkstyle-root.txt [17M]

    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/diff-patch-pylint.txt [24K]

    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/diff-patch-shellcheck.txt [20K]

    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/diff-patch-shelldocs.txt [12K]

    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/whitespace-eol.txt [9.2M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/whitespace-tabs.txt [288K]

    xml:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/xml.txt [4.0K]

    findbugs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html [8.0K]

    javadoc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/diff-javadoc-javadoc-root.txt [760K]

    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [320K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [48K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [84K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/696/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt [8.0K]

Powered by Apache Yetus 0.8.0-SNAPSHOT
http://yetus.apache.org
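The FindBugs item above is the generic "may expose internal representation" (EI_EXPOSE_REP) pattern: a getter returns a reference to a mutable internal field. A minimal Java sketch of the pattern and its usual defensive-copy fix (Holder is a hypothetical class; this is not the actual Resource.java code or its eventual patch):

{code}
// Illustration of FindBugs EI_EXPOSE_REP ("may expose internal representation").
// Holder is hypothetical and stands in for any class holding a mutable field.
public class Holder {
    private final long[] values = {1L, 2L, 3L};

    // Flagged pattern: callers receive the internal array and can mutate it,
    // silently breaking whatever invariants the owning class maintains.
    public long[] getValuesUnsafe() {
        return values;
    }

    // Typical fix: hand out a defensive copy instead of the field itself.
    public long[] getValues() {
        return values.clone();
    }
}
{code}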
[jira] [Created] (HDFS-13169) Ambari UI deploy fails during startup of Ambari Metrics
Aravindan Vijayan created HDFS-13169:
-------------------------------------

             Summary: Ambari UI deploy fails during startup of Ambari Metrics
                 Key: HDFS-13169
                 URL: https://issues.apache.org/jira/browse/HDFS-13169
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Aravindan Vijayan

{noformat}
HDP version: HDP-3.0.0.0-702
Ambari version: 2.99.99.0-77
{noformat}

/var/lib/ambari-agent/data/errors-52.txt:

{noformat}
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py", line 90, in <module>
    AmsCollector().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 371, in execute
    self.execute_prefix_function(self.command_name, 'post', env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 392, in execute_prefix_function
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 434, in post_start
    raise Fail("Pid file {0} doesn't exist after starting of the component.".format(pid_file))
resource_management.core.exceptions.Fail: Pid file /var/run/ambari-metrics-collector//hbase-ams-master.pid doesn't exist after starting of the component.
{noformat}

/var/lib/ambari-agent/data/output-52.txt:

{noformat}
2018-01-11 13:03:40,753 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.0.0-702 -> 3.0.0.0-702
2018-01-11 13:03:40,755 - Using hadoop conf dir: /usr/hdp/3.0.0.0-702/hadoop/conf
2018-01-11 13:03:40,884 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.0.0-702 -> 3.0.0.0-702
2018-01-11 13:03:40,885 - Using hadoop conf dir: /usr/hdp/3.0.0.0-702/hadoop/conf
2018-01-11 13:03:40,886 - Group['hdfs'] {}
2018-01-11 13:03:40,887 - Group['hadoop'] {}
2018-01-11 13:03:40,887 - Group['users'] {}
2018-01-11 13:03:40,887 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-01-11 13:03:40,890 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-01-11 13:03:40,891 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-01-11 13:03:40,892 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-01-11 13:03:40,893 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-01-11 13:03:40,893 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-01-11 13:03:40,894 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-01-11 13:03:40,894 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-01-11 13:03:40,895 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2018-01-11 13:03:40,895 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-01-11 13:03:40,896 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-01-11 13:03:40,897 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-01-11 13:03:40,897 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-01-11 13:03:40,898 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-01-11 13:03:40,903 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-01-11 13:03:40,903 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-01-11 13:03:40,904 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-01-11 13:03:40,905 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-01-11 13:03:40,906 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-01-11 13:03:40,913 - call returned (0, '1002')
2018-01-11 13:03:40,914 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1002'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-01-11 13:03:40,917 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase
{noformat}
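The root cause reported in errors-52.txt is a post-start check: after launching the Metrics Collector, the agent fails the deploy because the expected pid file never appears. A rough Java sketch of that kind of check (hypothetical code mirroring the Python post_start logic above; the polling loop is an assumption, not Ambari's exact behaviour):

{code}
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical sketch of a post-start pid-file check like the one the Ambari
// script performs in Python: fail the start if the pid file never shows up.
public class PidFileCheck {
    static void verifyStarted(String pidFile, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (Files.exists(Paths.get(pidFile))) {
                return; // the component wrote its pid; the start succeeded
            }
            Thread.sleep(500); // poll until the deadline expires
        }
        throw new IllegalStateException("Pid file " + pidFile
            + " doesn't exist after starting of the component.");
    }

    public static void main(String[] args) throws InterruptedException {
        verifyStarted("/var/run/ambari-metrics-collector/hbase-ams-master.pid", 10_000);
    }
}
{code}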
[jira] [Created] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS
Stephen O'Donnell created HDFS-13170:
-------------------------------------

             Summary: Port webhdfs unmaskedpermission parameter to HTTPFS
                 Key: HDFS-13170
                 URL: https://issues.apache.org/jira/browse/HDFS-13170
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Stephen O'Donnell

HDFS-6962 fixed a long-standing issue where default ACLs were not correctly applied to files created from the hadoop shell.

With that change, if you create a file in a directory carrying default ACLs while dfs.namenode.posix.acl.inheritance.enabled=false, the result is:

{code}
# file: /test_acl/file_from_shell_off
# owner: user1
# group: supergroup
user::rw-
user:user1:rwx        #effective:r--
user:user2:rwx        #effective:r--
group::r-x            #effective:r--
group:users:rwx       #effective:r--
mask::r--
other::r--
{code}

And if you enable that setting, to fix the bug above, the result is as you would expect:

{code}
# file: /test_acl/file_from_shell
# owner: user1
# group: supergroup
user::rw-
user:user1:rwx        #effective:rw-
user:user2:rwx        #effective:rw-
group::r-x            #effective:r--
group:users:rwx       #effective:rw-
mask::rw-
other::r--
{code}

If I then create a file over HTTPFS or webHDFS, the behaviour is not the same as above:

{code}
# file: /test_acl/default_permissions
# owner: user1
# group: supergroup
user::rwx
user:user1:rwx        #effective:r-x
user:user2:rwx        #effective:r-x
group::r-x
group:users:rwx       #effective:r-x
mask::r-x
other::r-x
{code}

Notice the mask is set to r-x, and this removes the write permission on the new file.

As part of HDFS-6962, a new parameter, 'unmaskedpermission', was added to webhdfs. Passing it on a webhdfs call produces the same behaviour as when a file is written from the CLI:

{code}
curl -i -X PUT -T test.txt --header "Content-Type:application/octet-stream" "http://host-10-17-103-28.coe.cloudera.comnamenode:50075/webhdfs/v1/test_acl/unmasked__770?op=CREATE&user.name=user1&namenoderpcaddress=namenode:8020&overwrite=false&unmaskedpermission=770"

# file: /test_acl/unmasked__770
# owner: user1
# group: supergroup
user::rwx
user:user1:rwx
user:user2:rwx
group::r-x
group:users:rwx
mask::rwx
other::---
{code}

However, this parameter was never ported to HTTPFS. This Jira is to replicate the same change in HTTPFS so the parameter is available there too.
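For comparison with the curl call in the description, the same request can be issued programmatically. A simplified Java sketch (hypothetical host and file path; it omits the usual two-step NameNode-to-DataNode redirect, and an HTTPFS endpoint will not honour unmaskedpermission until this Jira is implemented):

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch of the WebHDFS CREATE call above, carrying the HDFS-6962
// unmaskedpermission parameter; host and path are placeholders.
public class UnmaskedCreate {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://datanode.example.com:50075/webhdfs/v1/test_acl/unmasked__770"
            + "?op=CREATE&user.name=user1&namenoderpcaddress=namenode:8020"
            + "&overwrite=false&unmaskedpermission=770");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        try (OutputStream out = conn.getOutputStream()) {
            out.write("test data\n".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
{code}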
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/140/

[Feb 18, 2018 7:56:10 AM] (rohithsharmaks) YARN-7919. Refactor timelineservice-hbase module into submodules.
[Feb 18, 2018 8:38:50 AM] (rohithsharmaks) YARN-7937. Fix http method name in Cluster Application Timeout Update

-1 overall

The following subsystems voted -1:
    asflicense findbugs mvnsite unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    FindBugs :

        module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
        Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Unreaped Processes :

        hadoop-common:1
        hadoop-hdfs:25
        bkjournal:5
        hadoop-yarn-server-nodemanager:1
        hadoop-yarn-server-timelineservice:1
        hadoop-yarn-server-resourcemanager:1
        hadoop-yarn-client:8
        hadoop-mapreduce-client-jobclient:11
        hadoop-archives:1
        hadoop-distcp:5
        hadoop-yarn-applications-distributedshell:1

    Failed junit tests :

        hadoop.hdfs.TestBlocksScheduledCounter
        hadoop.hdfs.server.datanode.TestTransferRbw
        hadoop.hdfs.server.datanode.TestBlockRecovery
        hadoop.hdfs.TestMiniDFSCluster
        hadoop.yarn.server.nodemanager.webapp.TestNMWebServer
        hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestDockerContainerRuntime
        hadoop.yarn.server.nodemanager.TestNodeStatusUpdater
        hadoop.yarn.server.TestDiskFailures
        hadoop.mapred.TestJavaSerialization
        hadoop.mapred.TestClientRedirect
        hadoop.mapred.TestReduceFetch
        hadoop.mapred.TestLocalJobSubmission
        hadoop.mapred.TestLazyOutput
        hadoop.mapred.TestJobSysDirWithDFS
        hadoop.tools.TestIntegration
        hadoop.tools.TestDistCpViewFs
        hadoop.resourceestimator.solver.impl.TestLpSolver
        hadoop.resourceestimator.service.TestResourceEstimatorService

    Timed out junit tests :

        org.apache.hadoop.log.TestLogLevel
        org.apache.hadoop.hdfs.TestLeaseRecovery2
        org.apache.hadoop.hdfs.TestDatanodeRegistration
        org.apache.hadoop.hdfs.TestDFSClientFailover
        org.apache.hadoop.hdfs.web.TestWebHdfsTokens
        org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
        org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
        org.apache.hadoop.hdfs.TestFileAppendRestart
        org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter
        org.apache.hadoop.hdfs.TestDFSMkdirs
        org.apache.hadoop.hdfs.TestDFSOutputStream
        org.apache.hadoop.hdfs.TestDatanodeReport
        org.apache.hadoop.hdfs.web.TestWebHDFS
        org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
        org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
        org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
        org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
        org.apache.hadoop.hdfs.TestDistributedFileSystem
        org.apache.hadoop.hdfs.web.TestWebHDFSForHA
        org.apache.hadoop.hdfs.TestReplaceDatanodeFailureReplication
        org.apache.hadoop.hdfs.TestDFSShell
        org.apache.hadoop.hdfs.web.TestWebHDFSAcl
        org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM
        org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
        org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir
        org.apache.hadoop.contrib.bkjournal.TestBookKeeperSpeculativeRead
        org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerResync
        org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServices
        org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore
        org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy
        org.apache.hadoop.yarn.client.TestRMFailover
        org.apache.hadoop.yarn.client.cli.TestYarnCLI
        org.apache.hadoop.yarn.client.TestApplicationMasterServiceProtocolOnHA
        org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA
        org.apache.hadoop.yarn.client.api.impl.TestYarnClientWithReservation
        org.apache.hadoop.yarn.client.api.impl.TestYarnClient
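The FindBugs item above is the standard "boxed value is unboxed and then immediately reboxed" pattern. A generic Java illustration of what triggers it (not the actual ColumnRWHelper code):

{code}
// Generic trigger for FindBugs' unbox/rebox warning; unrelated to the
// actual ColumnRWHelper implementation flagged in the report.
public class ReboxExample {
    public static void main(String[] args) {
        Object raw = Long.valueOf(42L);

        // Flagged: casting to long unboxes, and assigning to Long reboxes.
        Long flagged = (long) (Long) raw;

        // Cleaner: keep the boxed reference and skip the round trip.
        Long clean = (Long) raw;

        System.out.println(flagged + " " + clean);
    }
}
{code}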
[jira] [Created] (HDFS-13171) Handle Deletion of nodes in SnapshotSkipList
Shashikant Banerjee created HDFS-13171:
---------------------------------------

             Summary: Handle Deletion of nodes in SnapshotSkipList
                 Key: HDFS-13171
                 URL: https://issues.apache.org/jira/browse/HDFS-13171
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: snapshots
            Reporter: Shashikant Banerjee
            Assignee: Shashikant Banerjee

This Jira will handle deletion of skipListNodes from DirectoryDiffList. If a node has multiple levels, the list needs to be balanced; if the node is uni-level, no balancing of the list is required.
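To make the balancing remark concrete: in a skip list, a node present only at level 0 can be unlinked directly, while a node spanning several levels must be unlinked at every level it participates in. A simplified, hypothetical sketch of that deletion step (not the actual DirectoryDiffList or skipListNode API):

{code}
import java.util.ArrayList;
import java.util.List;

// Simplified, hypothetical skip list illustrating the deletion rule in the
// description: a multi-level node must be unlinked at each level it spans,
// while a uni-level node only touches level 0.
class SkipNode {
    final int key;
    final List<SkipNode> next = new ArrayList<>(); // next.get(i) = successor at level i

    SkipNode(int key, int levels) {
        this.key = key;
        for (int i = 0; i < levels; i++) {
            next.add(null);
        }
    }
}

class SimpleSkipList {
    final SkipNode head = new SkipNode(Integer.MIN_VALUE, 16);

    void delete(int key) {
        SkipNode cur = head;
        // Walk down from the top level, bypassing the target wherever it appears.
        for (int level = head.next.size() - 1; level >= 0; level--) {
            while (cur.next.get(level) != null && cur.next.get(level).key < key) {
                cur = cur.next.get(level);
            }
            SkipNode candidate = cur.next.get(level);
            if (candidate != null && candidate.key == key) {
                // "Balancing" step: a multi-level node is unlinked once per
                // level here; a uni-level node only ever reaches level 0.
                cur.next.set(level, candidate.next.get(level));
            }
        }
    }
}
{code}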
[jira] [Created] (HDFS-13172) Implement task manager to handle creation and deletion of multi-level nodes in SnapshotSkipList
Shashikant Banerjee created HDFS-13172:
---------------------------------------

             Summary: Implement task manager to handle creation and deletion of multi-level nodes in SnapshotSkipList
                 Key: HDFS-13172
                 URL: https://issues.apache.org/jira/browse/HDFS-13172
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: snapshots
            Reporter: Shashikant Banerjee
            Assignee: Shashikant Banerjee
[jira] [Created] (HDFS-13173) Replace ArrayList with DirectoryDiffList(SnapshotSkipList) to store DirectoryDiffs
Shashikant Banerjee created HDFS-13173:
---------------------------------------

             Summary: Replace ArrayList with DirectoryDiffList(SnapshotSkipList) to store DirectoryDiffs
                 Key: HDFS-13173
                 URL: https://issues.apache.org/jira/browse/HDFS-13173
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Shashikant Banerjee
            Assignee: Shashikant Banerjee

This Jira will replace the existing ArrayList with DirectoryDiffList to store directory diffs for snapshots, based on the config value of skipInterval.
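One plausible reading of "based on the config value of skipInterval" is that the skip-list level of each diff is derived from its position: every skipInterval-th diff gets a shortcut one level up, every skipInterval^2-th diff two levels up, and so on. A hedged sketch of that indexing rule (hypothetical; the actual DirectoryDiffList may assign levels differently):

{code}
// Hypothetical rule for deriving a skip-list level from a diff's 1-based
// position and a configured skipInterval: the level is the number of times
// skipInterval divides the position.
public class LevelFromInterval {
    static int levelFor(int position, int skipInterval) {
        if (position <= 0 || skipInterval <= 1) {
            return 0;
        }
        int level = 0;
        while (position % skipInterval == 0) {
            level++;
            position /= skipInterval;
        }
        return level;
    }

    public static void main(String[] args) {
        // With skipInterval = 3: positions 3, 6, 12 get level 1; 9 gets level 2.
        for (int pos = 1; pos <= 12; pos++) {
            System.out.println("diff #" + pos + " -> level " + levelFor(pos, 3));
        }
    }
}
{code}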