Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/

[Sep 19, 2016 9:03:06 AM] (varunsaxena) YARN-5577. [Atsv2] Document object passing in infofilters with an
[Sep 19, 2016 9:08:01 AM] (jianhe) YARN-3141. Improve locks in
[Sep 19, 2016 6:17:03 PM] (wang) HDFS-10868. Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED.
[Sep 19, 2016 8:31:35 PM] (jlowe) YARN-5540. Scheduler spends too much time looking at empty priorities.
[Sep 19, 2016 10:16:47 PM] (cnauroth) HADOOP-13169. Randomize file list in SimpleCopyListing. Contributed by
[Sep 20, 2016 4:44:42 AM] (xiao) HDFS-10875. Optimize du -x to cache intermediate result. Contributed by

-1 overall

The following subsystems voted -1:
    asflicense unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:

        hadoop.hdfs.server.datanode.TestLargeBlockReport
        hadoop.hdfs.server.datanode.TestIncrementalBlockReports
        hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
        hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.client.TestApplicationClientProtocolOnHA
        hadoop.yarn.applications.distributedshell.TestDistributedShell

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/diff-compile-cc-root.txt  [4.0K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/diff-compile-javac-root.txt  [172K]

    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/diff-checkstyle-root.txt  [16M]

    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/diff-patch-pylint.txt  [16K]

    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/diff-patch-shellcheck.txt  [20K]

    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/diff-patch-shelldocs.txt  [16K]

    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/whitespace-eol.txt  [11M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/whitespace-tabs.txt  [1.3M]

    javadoc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/patch-javadoc-root.txt  [3.1M]

    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [144K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [40K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt  [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt  [56K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt  [268K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt  [72K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt  [8.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt  [124K]

    asflicense:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/170/artifact/out/patch-asflicense-problems.txt  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org
[jira] [Created] (HDFS-10876) Dispatcher#dispatch should log IOException stacktrace
Wei-Chiu Chuang created HDFS-10876:
--------------------------------------

             Summary: Dispatcher#dispatch should log IOException stacktrace
                 Key: HDFS-10876
                 URL: https://issues.apache.org/jira/browse/HDFS-10876
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: balancer & mover
    Affects Versions: 2.6.0
            Reporter: Wei-Chiu Chuang
            Priority: Trivial

This error logging should be improved: the warning should record the exception stack trace as well, not just the message.

{code:title=Dispatcher#dispatch}
try {
  ...
} catch (IOException e) {
  LOG.warn("Failed to move " + this + ": " + e.getMessage());
  ...
}
{code}
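For illustration only, a minimal self-contained sketch of the kind of change being asked for, assuming a commons-logging Log; this is not the committed patch and the names here are hypothetical:

{code:title=DispatchLoggingSketch.java}
// Hypothetical illustration, not the committed patch: passing the Throwable to
// LOG.warn makes the logger emit the full stack trace, which getMessage() alone drops.
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class DispatchLoggingSketch {
  private static final Log LOG = LogFactory.getLog(DispatchLoggingSketch.class);

  void dispatch(Object move) {
    try {
      throw new IOException("connection reset");   // stand-in for the real socket I/O
    } catch (IOException e) {
      // Current style: only the message is recorded, the stack trace is lost.
      LOG.warn("Failed to move " + move + ": " + e.getMessage());
      // Suggested style: include the exception so the WARN record carries the stack trace.
      LOG.warn("Failed to move " + move, e);
    }
  }
}
{code}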
[jira] [Created] (HDFS-10878) TestDFSClientRetries#testIdempotentAllocateBlockAndClose throwing ConcurrentModificationException
Rushabh S Shah created HDFS-10878:
-------------------------------------

             Summary: TestDFSClientRetries#testIdempotentAllocateBlockAndClose throwing ConcurrentModificationException
                 Key: HDFS-10878
                 URL: https://issues.apache.org/jira/browse/HDFS-10878
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs-client
    Affects Versions: 2.7.3
            Reporter: Rushabh S Shah
            Assignee: Rushabh S Shah

This failed in our internal build.

{noformat}
java.util.ConcurrentModificationException: null
	at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
	at java.util.ArrayList$Itr.next(ArrayList.java:851)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendUCParts(BlockInfoContiguousUnderConstruction.java:396)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.appendStringTo(BlockInfoContiguousUnderConstruction.java:382)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction.toString(BlockInfoContiguousUnderConstruction.java:375)
	at java.lang.String.valueOf(String.java:2982)
	at java.lang.StringBuilder.append(StringBuilder.java:131)
	at org.apache.hadoop.hdfs.protocol.ExtendedBlock.toString(ExtendedBlock.java:121)
	at com.google.common.base.Joiner.toString(Joiner.java:533)
	at com.google.common.base.Joiner.appendTo(Joiner.java:124)
	at com.google.common.base.Joiner.appendTo(Joiner.java:181)
	at com.google.common.base.Joiner.join(Joiner.java:237)
	at com.google.common.base.Joiner.join(Joiner.java:226)
	at com.google.common.base.Joiner.join(Joiner.java:245)
	at org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:485)
	at org.apache.hadoop.hdfs.TestDFSClientRetries$3.answer(TestDFSClientRetries.java:477)
	at org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
	at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
	at org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$$EnhancerByMockitoWithCGLIB$$cca97ed1.complete()
	at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2303)
	at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2279)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2243)
	at org.apache.hadoop.hdfs.TestDFSClientRetries.testIdempotentAllocateBlockAndClose(TestDFSClientRetries.java:507)
{noformat}

The exception is thrown while building the following log message:

{code:title=TestDFSClientRetries.java|borderStyle=solid}
@Test
public void testIdempotentAllocateBlockAndClose() throws Exception {
  ...
  public Boolean answer(InvocationOnMock invocation) throws Throwable {
    // complete() may return false a few times before it returns
    // true. We want to wait until it returns true, and then
    // make it retry one more time after that.
    LOG.info("Called complete(: " +
        Joiner.on(",").join(invocation.getArguments()) + ")");
    ...
  }
{code}
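For illustration only, a stand-alone sketch of the failure mode (hypothetical code, not taken from Hadoop): one thread structurally modifies an ArrayList while another thread's string-building iterates it, which is analogous to the race the trace above suggests between the under-construction replica list and its toString().

{code:title=ToStringRaceSketch.java}
// Hypothetical reproduction of the general failure mode; names are invented.
import java.util.ArrayList;
import java.util.List;

public class ToStringRaceSketch {
  static final List<String> replicas = new ArrayList<>();

  public static void main(String[] args) throws Exception {
    Thread writer = new Thread(() -> {
      for (int i = 0; i < 1_000_000; i++) {
        replicas.add("replica-" + i);        // concurrent structural modification
        if (i % 2 == 0) {
          replicas.remove(0);
        }
      }
    });
    writer.start();
    while (writer.isAlive()) {
      try {
        // Mirrors the Joiner.join(...) -> toString() path in the stack trace:
        // iterating the list while the writer mutates it.
        String ignored = String.join(",", replicas);
      } catch (RuntimeException raceArtifact) {
        // Typically java.util.ConcurrentModificationException, matching the trace above.
        System.out.println("reproduced: " + raceArtifact);
        break;
      }
    }
    writer.join();
  }
}
{code}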
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/100/

[Sep 19, 2016 6:17:03 PM] (wang) HDFS-10868. Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED.
[Sep 19, 2016 8:31:35 PM] (jlowe) YARN-5540. Scheduler spends too much time looking at empty priorities.
[Sep 19, 2016 10:16:47 PM] (cnauroth) HADOOP-13169. Randomize file list in SimpleCopyListing. Contributed by
[Sep 20, 2016 4:44:42 AM] (xiao) HDFS-10875. Optimize du -x to cache intermediate result. Contributed by
[Sep 20, 2016 7:03:31 AM] (jianhe) YARN-3140. Improve locks in AbstractCSQueue/LeafQueue/ParentQueue.

-1 overall

The following subsystems voted -1:
    compile unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:

        hadoop.ipc.TestRPCWaitForProxy
        hadoop.hdfs.TestWriteReadStripedFile
        hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
        hadoop.hdfs.tools.TestDFSAdminWithHA
        hadoop.hdfs.server.namenode.TestNestedEncryptionZones
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
        hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
        hadoop.yarn.server.timeline.TestRollingLevelDB
        hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
        hadoop.yarn.server.timeline.TestTimelineDataManager
        hadoop.yarn.server.timeline.TestLeveldbTimelineStore
        hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
        hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
        hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
        hadoop.yarn.server.timelineservice.storage.common.TestRowKeys
        hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters
        hadoop.yarn.server.timelineservice.storage.common.TestSeparator
        hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
        hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.resourcemanager.TestRMAdminService
        hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing
        hadoop.yarn.server.resourcemanager.TestResourceTrackerService
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.client.TestRMFailover
        hadoop.yarn.client.api.impl.TestNMClient
        hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
        hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
        hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
        hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
        hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
        hadoop.yarn.applications.distributedshell.TestDistributedShell
        hadoop.mapred.TestShuffleHandler
        hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService

    Timed out junit tests:

        org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
        org.apache.hadoop.mapred.TestMROpportunisticMaps

    compile:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/100/artifact/out/patch-compile-root.txt  [308K]

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/100/artifact/out/patch-compile-root.txt  [308K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/100/artifact/out/patch-compile-root.txt  [308K]

    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/100/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt  [120K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/100/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [200K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/100/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [52K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/100/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt  [52K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/100/artifact/out/patch-unit-hadoop-yarn-project_
[jira] [Created] (HDFS-10879) TestEncryptionZonesWithKMS#testReadWrite fails intermittently
Xiao Chen created HDFS-10879:
--------------------------------

             Summary: TestEncryptionZonesWithKMS#testReadWrite fails intermittently
                 Key: HDFS-10879
                 URL: https://issues.apache.org/jira/browse/HDFS-10879
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Xiao Chen
            Assignee: Xiao Chen

{noformat}
Error Message:
Key was rolled, versions should be different. Actual: test_key@0

Stack Trace:
java.lang.AssertionError: Key was rolled, versions should be different. Actual: test_key@0
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failEquals(Assert.java:185)
	at org.junit.Assert.assertNotEquals(Assert.java:161)
	at org.apache.hadoop.hdfs.TestEncryptionZones.testReadWrite(TestEncryptionZones.java:726)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}
[jira] [Created] (HDFS-10880) Federation Mount Table State Store internal API
Jason Kace created HDFS-10880:
---------------------------------

             Summary: Federation Mount Table State Store internal API
                 Key: HDFS-10880
                 URL: https://issues.apache.org/jira/browse/HDFS-10880
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: fs
            Reporter: Jason Kace
            Assignee: Jason Kace

The federation mount table state encapsulates the mapping of file paths in the global namespace to a specific nameservice (NameNode) and a local path on that nameservice. The mount table is shared by all router instances and represents a unified view of the global namespace. The state store API for the mount table allows the related records to be queried, updated, and deleted.
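For readers unfamiliar with the proposal, a hypothetical sketch of what such a mount table API could look like; the names, record fields, and method signatures below are invented for illustration and are not the committed API.

{code:title=MountTableStore.java (hypothetical sketch)}
// Illustrative only: global path -> (nameservice, local path) records shared by all routers.
import java.io.IOException;
import java.util.List;

interface MountTableStore {
  /** Hypothetical record type for one mount table entry. */
  final class MountTableEntry {
    final String sourcePath;      // path in the global federated namespace
    final String nameserviceId;   // target nameservice (NameNode)
    final String destinationPath; // path local to that nameservice
    MountTableEntry(String src, String ns, String dst) {
      this.sourcePath = src;
      this.nameserviceId = ns;
      this.destinationPath = dst;
    }
  }

  /** Query entries under a path prefix, as seen by every router. */
  List<MountTableEntry> getMountTableEntries(String pathPrefix) throws IOException;

  /** Add or update a mapping in the shared state store. */
  boolean updateMountTableEntry(MountTableEntry entry) throws IOException;

  /** Remove a mapping from the shared state store. */
  boolean removeMountTableEntry(String sourcePath) throws IOException;
}
{code}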
[jira] [Created] (HDFS-10881) Federation State Store Driver API
Jason Kace created HDFS-10881:
---------------------------------

             Summary: Federation State Store Driver API
                 Key: HDFS-10881
                 URL: https://issues.apache.org/jira/browse/HDFS-10881
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Jason Kace

The API interfaces and minimal classes required to support a state store data backend such as ZooKeeper or a file system.
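Again purely as a hypothetical sketch of the idea (a pluggable backend driver), not the API the patch will define:

{code:title=StateStoreDriver.java (hypothetical sketch)}
// Illustrative only: one driver implementation per backend (e.g. ZooKeeper, file system).
import java.io.IOException;
import java.util.List;

public abstract class StateStoreDriver {
  /** Connect to the backend, e.g. a ZooKeeper ensemble or a directory on a file system. */
  public abstract boolean init(String connectionString) throws IOException;

  /** Persist a batch of records of one record class. */
  public abstract <T> boolean put(Class<T> recordClass, List<T> records) throws IOException;

  /** Load all records of a record class from the backend. */
  public abstract <T> List<T> get(Class<T> recordClass) throws IOException;

  /** Remove the record matching a primary key. */
  public abstract <T> boolean remove(Class<T> recordClass, String key) throws IOException;

  /** Release backend resources. */
  public abstract void close() throws IOException;
}
{code}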
[jira] [Created] (HDFS-10882) Federation State Store Interface API
Jason Kace created HDFS-10882:
---------------------------------

             Summary: Federation State Store Interface API
                 Key: HDFS-10882
                 URL: https://issues.apache.org/jira/browse/HDFS-10882
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Jason Kace

The minimal classes and interfaces required to create state store internal data APIs using protobuf serialization. This is a prerequisite for higher-level APIs such as the registration API and the mount table API.
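As a rough illustration of what a serializable state store record might look like (hypothetical shape, not the classes the patch will add):

{code:title=BaseRecord.java (hypothetical sketch)}
// Illustrative only: a record that higher-level APIs (registration, mount table) can
// serialize into the backend, e.g. via a generated protobuf message.
public abstract class BaseRecord {
  /** Serialize the record, for example to the bytes of a protobuf message. */
  public abstract byte[] serialize();

  /** Populate this record from its serialized form. */
  public abstract void deserialize(byte[] data);

  /** Primary key used to store and look up the record. */
  public abstract String getPrimaryKey();

  /** Last-modified timestamp, letting the store expire or reconcile stale entries. */
  public abstract long getDateModified();
}
{code}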
[jira] [Resolved] (HDFS-9333) Some tests using MiniDFSCluster errored complaining port in use
     [ https://issues.apache.org/jira/browse/HDFS-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang resolved HDFS-9333.
-------------------------------
       Resolution: Fixed
    Fix Version/s: 3.0.0-alpha2
                   2.8.0

Resolving since it looks like this was committed to trunk, branch-2, and branch-2.8. Thanks for working on this [~iwasakims]!

> Some tests using MiniDFSCluster errored complaining port in use
> ----------------------------------------------------------------
>
>                 Key: HDFS-9333
>                 URL: https://issues.apache.org/jira/browse/HDFS-9333
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: test
>            Reporter: Kai Zheng
>            Assignee: Masatake Iwasaki
>            Priority: Minor
>             Fix For: 2.8.0, 3.0.0-alpha2
>
>         Attachments: HDFS-9333.001.patch, HDFS-9333.002.patch, HDFS-9333.003.patch
>
>
> Ref. the following:
> {noformat}
> Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.483 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
> testRead(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped)  Time elapsed: 11.021 sec  <<< ERROR!
> java.net.BindException: Port in use: localhost:49333
> 	at sun.nio.ch.Net.bind0(Native Method)
> 	at sun.nio.ch.Net.bind(Net.java:433)
> 	at sun.nio.ch.Net.bind(Net.java:425)
> 	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> 	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> 	at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
> 	at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:884)
> 	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:821)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:675)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:883)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:862)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1555)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2015)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1996)
> 	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.doTestRead(TestBlockTokenWithDFS.java:539)
> 	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped.testRead(TestBlockTokenWithDFSStriped.java:62)
> {noformat}
> Another one:
> {noformat}
> Tests run: 5, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 9.859 sec <<< FAILURE! - in org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
> testFailoverAndBackOnNNShutdown(org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController)  Time elapsed: 0.41 sec  <<< ERROR!
> java.net.BindException: Problem binding to [localhost:10021] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
> 	at sun.nio.ch.Net.bind0(Native Method)
> 	at sun.nio.ch.Net.bind(Net.java:433)
> 	at sun.nio.ch.Net.bind(Net.java:425)
> 	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> 	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> 	at org.apache.hadoop.ipc.Server.bind(Server.java:469)
> 	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:695)
> 	at org.apache.hadoop.ipc.Server.<init>(Server.java:2464)
> 	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:945)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:535)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
> 	at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:399)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:742)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:680)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:883)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:862)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1555)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1245)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1014)
> 	at org.apache.hado
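As general background for readers hitting the same symptom, a stand-alone illustration of how tests commonly sidestep this class of failure by letting the OS assign a free port; this is not necessarily what the HDFS-9333 patch itself does.

{code:title=EphemeralPortSketch.java (generic illustration)}
import java.net.ServerSocket;

public class EphemeralPortSketch {
  public static void main(String[] args) throws Exception {
    // Port 0 asks the kernel for any free port, so parallel test runs cannot
    // collide on a hard-coded port number.
    try (ServerSocket socket = new ServerSocket(0)) {
      System.out.println("bound to free port " + socket.getLocalPort());
    }
  }
}
{code}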
[jira] [Created] (HDFS-10883) `getTrashRoot`'s behavior is not consistent in DFS after enabling EZ.
Yuanbo Liu created HDFS-10883:
---------------------------------

             Summary: `getTrashRoot`'s behavior is not consistent in DFS after enabling EZ.
                 Key: HDFS-10883
                 URL: https://issues.apache.org/jira/browse/HDFS-10883
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Yuanbo Liu
            Assignee: Yuanbo Liu

Let's say the root path ("/") is an encryption zone, and there is a file called "/test" under the root path.

{code}
dfs.getTrashRoot(new Path("/"))
{code}

returns "/user/$USER/.Trash", while

{code}
dfs.getTrashRoot(new Path("/test"))
{code}

returns "/.Trash/$USER".

The second behavior is not correct. Since the root path is the encryption zone, meaning all files and directories in DFS are encrypted, it is more reasonable to return "/user/$USER/.Trash" no matter what the path is.
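For illustration, a minimal client sketch that prints both trash roots so the inconsistency can be observed; it assumes fs.defaultFS points at a cluster where "/" has already been created as an encryption zone, and is not a test from any patch.

{code:title=TrashRootCheck.java (illustrative sketch)}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TrashRootCheck {
  public static void main(String[] args) throws Exception {
    FileSystem dfs = FileSystem.get(new Configuration());
    // Reported: returns /user/$USER/.Trash
    System.out.println(dfs.getTrashRoot(new Path("/")));
    // Reported: returns /.Trash/$USER, which the JIRA argues is wrong when
    // the root itself is the encryption zone.
    System.out.println(dfs.getTrashRoot(new Path("/test")));
  }
}
{code}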