[jira] [Created] (HDFS-13050) [SPS] : Create start/stop script to start external SPS process.
Surendra Singh Lilhore created HDFS-13050:
------------------------------------------

  Summary: [SPS] : Create start/stop script to start external SPS process.
  Key: HDFS-13050
  URL: https://issues.apache.org/jira/browse/HDFS-13050
  Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
  Affects Versions: HDFS-10285
  Reporter: Surendra Singh Lilhore
  Assignee: Surendra Singh Lilhore

As part of this Jira we will add a main class for SPS and modify {{hadoop-daemon.sh}} to start the external SPS process.
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/113/

[Jan 22, 2018 8:15:31 AM] (aajisaka) HADOOP-15181. Typo in SecureMode.md

-1 overall

The following subsystems voted -1:
   asflicense unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
   cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
   unit

Specific tests:

   Unreaped Processes:
      hadoop-hdfs:41
      bkjournal:8
      hadoop-yarn-client:6
      hadoop-yarn-applications-distributedshell:1
      hadoop-mapreduce-client-jobclient:2
      hadoop-distcp:4
      hadoop-archives:1
      hadoop-extras:1

   Failed junit tests:
      hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
      hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
      hadoop.hdfs.server.balancer.TestBalancer
      hadoop.hdfs.server.namenode.ha.TestHAMetrics
      hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
      hadoop.hdfs.server.namenode.TestFileLimit
      hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
      hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
      hadoop.hdfs.server.namenode.TestFSImageWithAcl
      hadoop.hdfs.server.federation.router.TestRouterRpc
      hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd
      hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
      hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
      hadoop.hdfs.server.namenode.TestEditLogAutoroll
      hadoop.hdfs.server.namenode.TestStreamFile
      hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages
      hadoop.hdfs.server.namenode.TestFsck
      hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot
      hadoop.hdfs.server.namenode.TestDecommissioningStatus
      hadoop.hdfs.server.namenode.TestAuditLogger
      hadoop.hdfs.server.namenode.TestTransferFsImage
      hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
      hadoop.hdfs.server.mover.TestMover
      hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
      hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
      hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
      hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA
      hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot
      hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
      hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
      hadoop.hdfs.server.federation.router.TestNamenodeHeartbeat
      hadoop.hdfs.server.namenode.TestCacheDirectives
      hadoop.hdfs.server.namenode.TestProtectedDirectories
      hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
      hadoop.hdfs.server.namenode.TestXAttrConfigFlag
      hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
      hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade
      hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
      hadoop.hdfs.server.namenode.TestBackupNode
      hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer
      hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA
      hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination
      hadoop.hdfs.server.federation.router.TestRouterMountTable
      hadoop.hdfs.server.namenode.TestStartup
      hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
      hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
      hadoop.hdfs.server.blockmanagement.TestNodeCount
      hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
      hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
      hadoop.hdfs.server.namenode.TestSaveNamespace
      hadoop.hdfs.server.namenode.TestNameNodeRpcServerMethods
      hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
      hadoop.hdfs.server.federation.store.driver.TestStateStoreFileSystem
      hadoop.hdfs.server.namenode.TestDeadDatanode
      hadoop.hdfs.server.namenode.ha.TestEditLogTailer
      hadoop.hdfs.server.balancer.TestBalancerRPCDelay
      hadoop.hdfs.server.namenode.ha.TestStateTransitionFailure
      hadoop.hdfs.server.namenode.TestEditLogRace
      hadoop.hdfs.server.namenode.TestNameNodeResourceChecker
      hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
      hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
      hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
      hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
      hadoop.hdfs.server.namenode.TestEditLogJournalFailures
      hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
      hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
      hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy
      hado
[jira] [Created] (HDFS-13051) Deadlock occurs when using async editlog
zhangwei created HDFS-13051:
----------------------------

  Summary: Deadlock occurs when using async editlog
  Key: HDFS-13051
  URL: https://issues.apache.org/jira/browse/HDFS-13051
  Project: Hadoop HDFS
  Issue Type: Bug
  Reporter: zhangwei
[jira] [Created] (HDFS-13052) WebHDFS: Add support for snapshot diff
Lokesh Jain created HDFS-13052:
-------------------------------

  Summary: WebHDFS: Add support for snapshot diff
  Key: HDFS-13052
  URL: https://issues.apache.org/jira/browse/HDFS-13052
  Project: Hadoop HDFS
  Issue Type: Task
  Reporter: Lokesh Jain
  Assignee: Lokesh Jain

This Jira aims to implement the snapshot diff operation for the WebHDFS filesystem.
[jira] [Created] (HDFS-13053) Track time to process packet in Datanode
Íñigo Goiri created HDFS-13053:
-------------------------------

  Summary: Track time to process packet in Datanode
  Key: HDFS-13053
  URL: https://issues.apache.org/jira/browse/HDFS-13053
  Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
  Reporter: Íñigo Goiri

We should track the time that each datanode takes to process a packet.
[jira] [Created] (HDFS-13054) Handling PathIsNotEmptyDirectoryException in DFSClient delete call
Nanda kumar created HDFS-13054:
-------------------------------

  Summary: Handling PathIsNotEmptyDirectoryException in DFSClient delete call
  Key: HDFS-13054
  URL: https://issues.apache.org/jira/browse/HDFS-13054
  Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
  Reporter: Nanda kumar
  Assignee: Nanda kumar

In the {{DFSClient#delete}} call, if we get a {{RemoteException}} wrapping {{PathIsNotEmptyDirectoryException}}, we should unwrap it and throw {{PathIsNotEmptyDirectoryException}} to the caller.
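A minimal sketch of the unwrap pattern being described, assuming the existing {{RemoteException#unwrapRemoteException}} helper and the {{ClientProtocol}} proxy field ({{namenode}}) inside DFSClient; the method body is abridged and the exception list shown is illustrative, not the committed change:

  // Sketch only: DFSClient#delete with PathIsNotEmptyDirectoryException
  // included in the set of exceptions unwrapped from the RemoteException
  // wrapper, so callers see the specific type instead of RemoteException.
  import java.io.FileNotFoundException;
  import java.io.IOException;

  import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
  import org.apache.hadoop.ipc.RemoteException;
  import org.apache.hadoop.security.AccessControlException;

  public boolean delete(String src, boolean recursive) throws IOException {
    try {
      return namenode.delete(src, recursive);      // ClientProtocol RPC
    } catch (RemoteException re) {
      // Unwrap so the caller can catch the concrete exception type.
      throw re.unwrapRemoteException(AccessControlException.class,
          FileNotFoundException.class,
          PathIsNotEmptyDirectoryException.class);
    }
  }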
[jira] [Created] (HDFS-13055) Aggregate usage statistics from datanodes
Ajay Kumar created HDFS-13055:
------------------------------

  Summary: Aggregate usage statistics from datanodes
  Key: HDFS-13055
  URL: https://issues.apache.org/jira/browse/HDFS-13055
  Project: Hadoop HDFS
  Issue Type: Bug
  Reporter: Ajay Kumar
  Assignee: Ajay Kumar

We collect a variety of statistics in DataNodes and expose them via JMX. Aggregating some of the high-level statistics we already collect in {{DataNodeMetrics}} (like bytesRead, bytesWritten, etc.) over a configurable time window would create a central repository accessible via JMX and the UI. A rough sketch of the idea follows below.
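A rough sketch of the windowed-aggregation idea only, using hypothetical names (RollingStatsWindow, StatsSnapshot) and just two counters; the actual change would hang off {{DataNodeMetrics}} and publish the aggregate through JMX, which is not shown here:

  // Hypothetical sketch: keep periodic snapshots of monotonically increasing
  // counters (bytesRead, bytesWritten) and report the delta over a
  // configurable time window. Names are illustrative, not Hadoop APIs.
  import java.util.ArrayDeque;
  import java.util.Deque;

  public class RollingStatsWindow {

    /** One timestamped sample of the cumulative counters. */
    private static final class StatsSnapshot {
      final long timeMs;
      final long bytesRead;
      final long bytesWritten;
      StatsSnapshot(long timeMs, long bytesRead, long bytesWritten) {
        this.timeMs = timeMs;
        this.bytesRead = bytesRead;
        this.bytesWritten = bytesWritten;
      }
    }

    private final long windowMs;                       // configurable window
    private final Deque<StatsSnapshot> samples = new ArrayDeque<>();

    public RollingStatsWindow(long windowMs) {
      this.windowMs = windowMs;
    }

    /** Called periodically, e.g. from the metrics update thread. */
    public synchronized void record(long now, long bytesRead, long bytesWritten) {
      samples.addLast(new StatsSnapshot(now, bytesRead, bytesWritten));
      // Drop samples that have fallen out of the window.
      while (!samples.isEmpty() && samples.peekFirst().timeMs < now - windowMs) {
        samples.removeFirst();
      }
    }

    /** Bytes read within the window = newest cumulative value - oldest. */
    public synchronized long bytesReadInWindow() {
      if (samples.size() < 2) {
        return 0;
      }
      return samples.peekLast().bytesRead - samples.peekFirst().bytesRead;
    }
  }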
[jira] [Created] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts
Dennis Huo created HDFS-13056:
------------------------------

  Summary: Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts
  Key: HDFS-13056
  URL: https://issues.apache.org/jira/browse/HDFS-13056
  Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, distcp, erasure-coding, federation, hdfs
  Affects Versions: 3.0.0
  Reporter: Dennis Huo

FileChecksum was first introduced in [https://issues-test.apache.org/jira/browse/HADOOP-3981] and has ever since been defined as MD5-of-MD5-of-CRC: per-512-byte chunk CRCs are already stored as part of datanode metadata, and the MD5 approach is used to compute an aggregate value in a distributed manner, with individual datanodes computing the MD5-of-CRCs per block in parallel and the HDFS client computing the second-level MD5.

An often-cited shortcoming of this approach is that the FileChecksum is sensitive to the internal block-size and chunk-size configuration, so HDFS files with different block/chunk settings cannot be compared. More commonly, different HDFS clusters may use different block sizes, in which case data migration can't use the FileChecksum for distcp's rsync functionality or for verifying end-to-end data integrity (on top of the low-level data integrity checks applied at data transfer time).

This was revisited in https://issues.apache.org/jira/browse/HDFS-8430 during the addition of checksum support for striped erasure-coded files; while there was some discussion of using CRC composability, it ultimately settled on the hierarchical MD5 approach, which also means checksums of basic replicated files are not comparable to those of striped files.

This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses CRC composition to remain completely chunk/block agnostic, and allows comparison between striped and replicated files, between different HDFS instances, and possibly even between HDFS and other external storage systems. The feature can be added in place, compatible with existing block metadata, and doesn't need to change the normal path of chunk verification, so it is minimally invasive. This also means even large preexisting HDFS deployments could adopt this feature to retroactively sync data.

A detailed design document can be found here: https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf
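The property the proposal relies on is that CRCs compose: the CRC of a concatenated byte range can be derived from the CRCs of its parts plus the length of the trailing part, so an aggregate checksum becomes independent of where chunk and block boundaries fall. A self-contained illustration of that property only (a Java port of the well-known zlib-style crc32_combine over standard CRC-32; HDFS-13056 itself also covers CRC32C and striped/replicated layouts, which are not shown here):

  // Illustration of CRC composition, not the HDFS-13056 implementation:
  // combine two CRC-32 values as if their byte ranges had been checksummed
  // as one contiguous stream (zlib-style crc32_combine ported to Java).
  import java.nio.charset.StandardCharsets;
  import java.util.zip.CRC32;

  public class Crc32CombineDemo {

    // Multiply a 32x32 GF(2) matrix (one long per row) by a 32-bit vector.
    private static long gf2MatrixTimes(long[] mat, long vec) {
      long sum = 0;
      for (int i = 0; vec != 0; vec >>>= 1, i++) {
        if ((vec & 1) != 0) {
          sum ^= mat[i];
        }
      }
      return sum;
    }

    // square = mat * mat in GF(2).
    private static void gf2MatrixSquare(long[] square, long[] mat) {
      for (int n = 0; n < 32; n++) {
        square[n] = gf2MatrixTimes(mat, mat[n]);
      }
    }

    // crc1: CRC of the first part, crc2: CRC of the second part,
    // len2: length in bytes of the second part.
    public static long crc32Combine(long crc1, long crc2, long len2) {
      if (len2 <= 0) {
        return crc1;
      }
      long[] even = new long[32];
      long[] odd = new long[32];
      odd[0] = 0xedb88320L;              // reflected CRC-32 polynomial
      long row = 1;
      for (int n = 1; n < 32; n++) {
        odd[n] = row;
        row <<= 1;
      }
      gf2MatrixSquare(even, odd);        // operator for two zero bits
      gf2MatrixSquare(odd, even);        // operator for four zero bits
      // Append len2 zero bytes to crc1 by repeated squaring of the operator.
      do {
        gf2MatrixSquare(even, odd);
        if ((len2 & 1) != 0) {
          crc1 = gf2MatrixTimes(even, crc1);
        }
        len2 >>= 1;
        if (len2 == 0) {
          break;
        }
        gf2MatrixSquare(odd, even);
        if ((len2 & 1) != 0) {
          crc1 = gf2MatrixTimes(odd, crc1);
        }
        len2 >>= 1;
      } while (len2 != 0);
      return crc1 ^ crc2;
    }

    public static void main(String[] args) {
      byte[] a = "block one ".getBytes(StandardCharsets.UTF_8);
      byte[] b = "block two".getBytes(StandardCharsets.UTF_8);

      CRC32 whole = new CRC32();
      whole.update(a);
      whole.update(b);

      CRC32 crcA = new CRC32();
      crcA.update(a);
      CRC32 crcB = new CRC32();
      crcB.update(b);

      // Both prints show the same value, regardless of where the split falls.
      System.out.println(Long.toHexString(whole.getValue()));
      System.out.println(Long.toHexString(
          crc32Combine(crcA.getValue(), crcB.getValue(), b.length)));
    }
  }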
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/

[Jan 22, 2018 6:30:01 PM] (yufei) YARN-7755. Clean up deprecation messages for allocation increments in FS
[Jan 22, 2018 9:33:38 PM] (eyang) YARN-7729. Add support for setting Docker PID namespace mode.
[Jan 22, 2018 11:54:44 PM] (hanishakoneru) HADOOP-15121. Encounter NullPointerException when using
[Jan 23, 2018 12:02:32 AM] (hanishakoneru) HDFS-13023. Journal Sync does not work on a secure cluster. Contributed

-1 overall

The following subsystems voted -1:
   asflicense findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
   cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
   unit

Specific tests:

   FindBugs:
      module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
      org.apache.hadoop.yarn.api.records.Resource.getResources() may expose internal representation by returning Resource.resources At Resource.java:by returning Resource.resources At Resource.java:[line 234]

   Failed junit tests:
      hadoop.hdfs.web.TestWebHdfsTimeouts
      hadoop.hdfs.TestReadStripedFileWithMissingBlocks
      hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
      hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
      hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
      hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage
      hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
      hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesReservation
      hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesConfigurationMutation
      hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy
      hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokens
      hadoop.yarn.server.resourcemanager.webapp.TestRMWebappAuthentication
      hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
      hadoop.yarn.server.TestDiskFailures
      hadoop.yarn.applications.distributedshell.TestDistributedShell
      hadoop.mapreduce.lib.output.TestJobOutputCommitter
      hadoop.mapreduce.v2.TestMROldApiJobs
      hadoop.mapreduce.v2.TestUberAM
      hadoop.mapred.TestMRTimelineEventHandling
      hadoop.mapred.TestJobCleanup

   cc:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-compile-javac-root.txt [280K]

   checkstyle:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-checkstyle-root.txt [17M]

   pylint:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/whitespace-eol.txt [9.2M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/whitespace-tabs.txt [292K]

   findbugs:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html [8.0K]

   javadoc:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/diff-javadoc-javadoc-root.txt [760K]

   unit:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [452K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [52K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [1.1M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [380K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [16K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/666/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapre