[jira] [Created] (HDDS-460) Replication manager failed to import container data

2018-09-14 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-460:
---

 Summary: Replication manager failed to import container data
 Key: HDDS-460
 URL: https://issues.apache.org/jira/browse/HDDS-460
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.2.1
Reporter: Nilotpal Nandi


The replication manager is not able to import the downloaded container data on the datanode; the import fails.

 

Snippet of ozone.log:
{noformat}
2018-09-14 09:34:05,249 [grpc-default-executor-139] INFO (GrpcReplicationClient.java:161) - Container is downloaded to /tmp/container-copy/container-14.tar.gz
2018-09-14 09:34:05,389 [grpc-default-executor-131] ERROR (ReplicateContainerCommandHandler.java:164) - Can't import the downloaded container data id=8
java.io.EOFException
 at org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream.read(GzipCompressorInputStream.java:241)
 at org.apache.commons.compress.archivers.tar.TarBuffer.readBlock(TarBuffer.java:224)
 at org.apache.commons.compress.archivers.tar.TarBuffer.readRecord(TarBuffer.java:195)
 at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.read(TarArchiveInputStream.java:486)
 at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.skip(TarArchiveInputStream.java:182)
 at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:220)
 at org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.unpackContainerDescriptor(TarContainerPacker.java:200)
 at org.apache.hadoop.ozone.container.common.statemachine.commandhandler.ReplicateContainerCommandHandler.importContainer(ReplicateContainerCommandHandler.java:144)
 at org.apache.hadoop.ozone.container.common.statemachine.commandhandler.ReplicateContainerCommandHandler.lambda$handle$0(ReplicateContainerCommandHandler.java:121)
 at java.util.concurrent.CompletableFuture.uniAccept(CompletableFuture.java:656)
 at java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:632)
 at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
 at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
 at org.apache.hadoop.ozone.container.replication.GrpcReplicationClient$StreamDownloader.onCompleted(GrpcReplicationClient.java:160)
 at org.apache.ratis.shaded.io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:418)
 at org.apache.ratis.shaded.io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
 at org.apache.ratis.shaded.io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
 at org.apache.ratis.shaded.io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
 at org.apache.ratis.shaded.io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
 at org.apache.ratis.shaded.io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
 at org.apache.ratis.shaded.io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
 at org.apache.ratis.shaded.io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
 at org.apache.ratis.shaded.io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:403)
 at org.apache.ratis.shaded.io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
 at org.apache.ratis.shaded.io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
 at org.apache.ratis.shaded.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
 at org.apache.ratis.shaded.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
 at org.apache.ratis.shaded.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
 at org.apache.ratis.shaded.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
 at org.apache.ratis.shaded.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745){noformat}
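The trace ends in a java.io.EOFException thrown from the gzip layer, which is the signature of a truncated archive. A minimal, self-contained sketch of that failure mode, using only the JDK's java.util.zip classes rather than the commons-compress classes in the trace (names and data here are illustrative, not Ozone code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class TruncatedGzipDemo {

    // Compress a byte array with gzip.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    // Returns true when draining the bytes as a gzip stream ends in
    // EOFException, i.e. the archive was cut off mid-download.
    static boolean failsWithEof(byte[] data) {
        try (InputStream in = new GZIPInputStream(new ByteArrayInputStream(data))) {
            byte[] buf = new byte[1024];
            while (in.read(buf) != -1) {
                // drain the stream
            }
            return false;
        } catch (EOFException e) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] full = gzip("container data".getBytes(StandardCharsets.UTF_8));
        byte[] truncated = Arrays.copyOf(full, full.length / 2);
        System.out.println(failsWithEof(full));      // intact archive: false
        System.out.println(failsWithEof(truncated)); // truncated archive: true
    }
}
```

This suggests the downloaded tarball was incomplete; verifying the download (e.g. by length or checksum) before importing would let the handler retry instead of surfacing a raw EOFException.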



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-461) container remains in CLOSING state forever

2018-09-14 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-461:
---

 Summary: container remains in CLOSING state forever
 Key: HDDS-461
 URL: https://issues.apache.org/jira/browse/HDDS-461
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.2.1
Reporter: Nilotpal Nandi


Container id # 13's state is not changing from CLOSING to CLOSED.
{noformat}
[root@ctr-e138-1518143905142-459606-01-02 bin]# ./ozone scmcli info 13
raft.rpc.type = GRPC (default)
raft.grpc.message.size.max = 33554432 (custom)
raft.client.rpc.retryInterval = 300 ms (default)
raft.client.async.outstanding-requests.max = 100 (default)
raft.client.async.scheduler-threads = 3 (default)
raft.grpc.flow.control.window = 1MB (=1048576) (default)
raft.grpc.message.size.max = 33554432 (custom)
raft.client.rpc.request.timeout = 3000 ms (default)
Container id: 13
Container State: OPEN
Container Path: 
/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/13/metadata
Container Metadata:
LeaderID: ctr-e138-1518143905142-459606-01-03.hwx.site
Datanodes: 
[ctr-e138-1518143905142-459606-01-07.hwx.site,ctr-e138-1518143905142-459606-01-08.hwx.site,ctr-e138-1518143905142-459606-01-03.hwx.site]{noformat}
 

Snippet of scmcli list:
{noformat}
{
 "state" : "CLOSING",
 "replicationFactor" : "THREE",
 "replicationType" : "RATIS",
 "allocatedBytes" : 4831838208,
 "usedBytes" : 4831838208,
 "numberOfKeys" : 0,
 "lastUsed" : 4391827471,
 "stateEnterTime" : 5435591457,
 "owner" : "f8332db1-b8b1-4077-a9ea-097033d074b7",
 "containerID" : 13,
 "deleteTransactionId" : 0,
 "containerOpen" : true
}{noformat}






[jira] [Created] (HDFS-13917) RBF: Successfully updated mount point message is coming if we update the mount entry by passing the nameservice id for which mount entry is not present

2018-09-14 Thread Soumyapn (JIRA)
Soumyapn created HDFS-13917:
---

 Summary: RBF: Successfully updated mount point message is coming 
if we update the mount entry by passing the nameservice id for which mount 
entry is not present
 Key: HDFS-13917
 URL: https://issues.apache.org/jira/browse/HDFS-13917
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Soumyapn


*Test Steps:*
 # Add the mount entry: {{hdfs dfsrouteradmin -add /apps hacluster /opt}}
 # Update the mount entry by giving hacluster2: {{hdfs dfsrouteradmin -update /apps hacluster2 /opt -readonly}}

*Actual Result:*

Console message says "Successfully updated mount entry for /apps".

*Expected Result:*

This console message is confusing: the user will be under the impression that the mount entry was updated to read-only, even though the nameservice passed has no mount entry.

The console message could instead be *"There are no entries for the hacluster2 nameservice"* so that the user gets a proper message for the executed update command.
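A hypothetical sketch of the fix direction (the class and method names below are illustrative, not the actual RouterAdmin code): look up the mount entry for the given nameservice first, and only print the success message when an entry was really updated.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a tiny in-memory mount table that refuses to
// report success when no entry exists for the requested nameservice.
public class MountUpdateCheck {

    // mount point -> (nameservice -> destination)
    private final Map<String, Map<String, String>> mountTable = new HashMap<>();

    void add(String mount, String nameservice, String dest) {
        mountTable.computeIfAbsent(mount, k -> new HashMap<>()).put(nameservice, dest);
    }

    String update(String mount, String nameservice, String dest) {
        Map<String, String> entries = mountTable.get(mount);
        if (entries == null || !entries.containsKey(nameservice)) {
            // Surface the problem instead of a misleading success message.
            return "There are no entries for the " + nameservice + " nameservice";
        }
        entries.put(nameservice, dest);
        return "Successfully updated mount entry for " + mount;
    }

    public static void main(String[] args) {
        MountUpdateCheck admin = new MountUpdateCheck();
        admin.add("/apps", "hacluster", "/opt");
        // Unknown nameservice: prints the error message, not success.
        System.out.println(admin.update("/apps", "hacluster2", "/opt"));
        // Known nameservice: prints the success message.
        System.out.println(admin.update("/apps", "hacluster", "/opt2"));
    }
}
```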






[jira] [Created] (HDDS-462) Optimize ContainerStateMap#getMatchingContainerIDs in SCM

2018-09-14 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-462:


 Summary: Optimize ContainerStateMap#getMatchingContainerIDs in SCM
 Key: HDDS-462
 URL: https://issues.apache.org/jira/browse/HDDS-462
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Nanda kumar
Assignee: Nanda kumar


While allocating a new block we spend a lot of time in 
{{ContainerStateMap#getMatchingContainerIDs}}. This call should be optimized to 
get better performance.
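One common way to optimize this kind of matching query, shown here as a hedged sketch with illustrative names (not SCM's actual data structures), is to keep one index per attribute and intersect the candidate sets, iterating the smaller set and probing the larger one:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only: per-attribute indexes so a matching query
// costs O(min(|candidates|)) instead of a scan over all containers.
public class ContainerIndex {

    private final Map<String, Set<Long>> byState = new HashMap<>();
    private final Map<String, Set<Long>> byOwner = new HashMap<>();

    void add(long id, String state, String owner) {
        byState.computeIfAbsent(state, k -> new HashSet<>()).add(id);
        byOwner.computeIfAbsent(owner, k -> new HashSet<>()).add(id);
    }

    Set<Long> matching(String state, String owner) {
        Set<Long> a = byState.getOrDefault(state, Collections.emptySet());
        Set<Long> b = byOwner.getOrDefault(owner, Collections.emptySet());
        // Iterate the smaller set, probe the larger: O(min(|a|, |b|)).
        Set<Long> small = a.size() <= b.size() ? a : b;
        Set<Long> large = (small == a) ? b : a;
        Set<Long> result = new HashSet<>();
        for (Long id : small) {
            if (large.contains(id)) {
                result.add(id);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        ContainerIndex idx = new ContainerIndex();
        idx.add(13, "CLOSING", "scm");
        idx.add(14, "OPEN", "scm");
        idx.add(15, "OPEN", "om");
        System.out.println(idx.matching("OPEN", "scm")); // prints [14]
    }
}
```

Caching the intersection result for hot (state, owner) combinations is a further step, at the cost of invalidating the cache on state changes.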






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-09-14 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/896/

[Sep 13, 2018 4:04:49 AM] (fabbri) HADOOP-15426 Make S3guard client resilient 
to DDB throttle events and
[Sep 13, 2018 5:59:40 AM] (msingh) HDDS-420. putKey failing with 
KEY_ALLOCATION_ERROR. Contributed by
[Sep 13, 2018 11:42:09 AM] (tasanuma) HADOOP-12760. sun.misc.Cleaner has moved 
to a new location in OpenJDK 9.
[Sep 13, 2018 12:01:53 PM] (elek) HDDS-415. 'ozone om' with incorrect argument 
first logs all the
[Sep 13, 2018 12:17:21 PM] (sunilg) YARN-8630. ATSv2 REST APIs should honor 
filter-entity-list-by-user in
[Sep 13, 2018 1:31:07 PM] (msingh) HDDS-233. Update ozone to latest ratis 
snapshot
[Sep 13, 2018 2:21:35 PM] (wwei) YARN-8729. Node status updater thread could be 
lost after it is
[Sep 13, 2018 4:16:58 PM] (inigoiri) HDFS-13914. Fix DN UI logs link broken 
when https is enabled after
[Sep 13, 2018 6:28:54 PM] (jlowe) YARN-8680. YARN NM: Implement Iterable 
Abstraction for
[Sep 13, 2018 7:41:38 PM] (jlowe) MAPREDUCE-7133. History Server task attempts 
REST API returns invalid
[Sep 13, 2018 8:00:35 PM] (arp) HDDS-456. TestOzoneShell#init is breaking due 
to Null Pointer Exception.
[Sep 13, 2018 8:34:22 PM] (hanishakoneru) HDDS-414. Fix sbin/stop-ozone.sh to 
stop Ozone daemons. Contributed by
[Sep 13, 2018 9:22:56 PM] (weichiu) HDFS-13838. 
WebHdfsFileSystem.getFileStatus() won't return correct
[Sep 13, 2018 11:37:44 PM] (hanishakoneru) HDDS-423. Introduce an ozone 
specific log4j.properties. Contributed by
[Sep 14, 2018 2:02:27 AM] (brahma) HADOOP-15733. Correct the log when Invalid 
emptier Interval configured.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field:FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component):in 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component): new java.io.FileWriter(File) At 
YarnServiceJobSubmitter.java:[line 195] 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component) may fail to clean up java.io.Writer on checked exception 
Obligation to clean up resource created at YarnServiceJobSubmitter.java:to 
clean up java.io.Writer on checked exception Obligation to clean up resource 
created at YarnServiceJobSubmitter.java:[line 195] is not discharged 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String,
 int, String) concatenates strings using + in a loop At 
YarnServiceUtils.java:using + in a loop At YarnServiceUtils.java:[line 72] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/896/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/896/artifact/out/diff-compile-javac-root.txt
  [300K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/896/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/896/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/896/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/896/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/896/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/896/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/896/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/

[jira] [Created] (HDDS-463) Fix the release packaging of the ozone distribution

2018-09-14 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-463:
-

 Summary: Fix the release packaging of the ozone distribution
 Key: HDDS-463
 URL: https://issues.apache.org/jira/browse/HDDS-463
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


I found a few small problems during my test release of ozone:

1. The source assembly file still contains the ancient hdsl string in its name.
2. The README of the binary distribution is confusing (it is the Hadoop README).
3. The binary distribution contains unnecessary test and source jar files.
4. (Thanks to [~bharatviswa]): The log message after the dist creation is wrong (it doesn't contain the restored version tag in the name).

I combined these problems into one issue as all of them can be solved with very small modifications...






[jira] [Created] (HDDS-464) Fix TestCloseContainerHandlingByClient

2018-09-14 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-464:


 Summary: Fix TestCloseContainerHandlingByClient
 Key: HDDS-464
 URL: https://issues.apache.org/jira/browse/HDDS-464
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain


testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and 
AssertionError respectively.






[jira] [Created] (HDDS-465) Suppress group mapping lookup warnings for ozone

2018-09-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-465:
---

 Summary: Suppress group mapping lookup warnings for ozone
 Key: HDDS-465
 URL: https://issues.apache.org/jira/browse/HDDS-465
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Fix For: 0.2.1


Change log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR in docker-config and log4j.properties. We will remove this once authentication is fully implemented for ozone. Otherwise, we see the following warnings upon volume creation.

{code}
hadoop@dd71731d0154:~$ ozone oz volume create /volume1 --root -u test
2018-09-14 18:06:03 WARN  ShellBasedUnixGroupsMapping:210 - unable to return groups for user test
PartialGroupNameException The user name 'test' is not found. id: ‘test’: no such user
id: ‘test’: no such user

 at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
...
2018-09-14 18:06:03 INFO  RpcClient:206 - Creating Volume: volume1, with test as owner and quota set to 1152921504606846976 bytes.
{code}





[jira] [Created] (HDDS-466) Handle null in argv of StorageContainerManager#createSCM

2018-09-14 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-466:


 Summary: Handle null in argv of StorageContainerManager#createSCM
 Key: HDDS-466
 URL: https://issues.apache.org/jira/browse/HDDS-466
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar
Assignee: Nanda kumar


{{StorageContainerManager#createSCM}} takes {{String[]}} as an argument, which is used for constructing the startup message. We have to check whether the value passed is null before constructing the startup message.
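A minimal sketch of the requested guard (the class and method names below are illustrative, not the actual SCM code): treat a null argv the same as an empty argument list before formatting the message.

```java
import java.util.Arrays;

// Illustrative sketch only: null-safe construction of a startup message.
public class StartupMessage {

    static String build(String[] argv) {
        if (argv == null) {
            argv = new String[0]; // treat null the same as "no arguments"
        }
        return "STARTUP_MSG: args = " + Arrays.asList(argv);
    }

    public static void main(String[] args) {
        System.out.println(build(null));                  // prints STARTUP_MSG: args = []
        System.out.println(build(new String[]{"--init"})); // prints STARTUP_MSG: args = [--init]
    }
}
```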






[jira] [Created] (HDDS-467) TestKeys#runTestPutAndListKey breaks with NPE

2018-09-14 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-467:
--

 Summary: TestKeys#runTestPutAndListKey breaks with NPE
 Key: HDDS-467
 URL: https://issues.apache.org/jira/browse/HDDS-467
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


{noformat}
[ERROR] testPutAndListKey(org.apache.hadoop.ozone.web.client.TestKeysRatis)  Time elapsed: 7.767 s  <<< ERROR!
java.lang.NullPointerException
 at org.apache.hadoop.ozone.web.client.TestKeys.runTestPutAndListKey(TestKeys.java:500)
 at org.apache.hadoop.ozone.web.client.TestKeysRatis.testPutAndListKey(TestKeysRatis.java:115)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){noformat}






[jira] [Created] (HDDS-468) Add version number to plugin and ozone file system jar

2018-09-14 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-468:
---

 Summary: Add version number to plugin and ozone file system jar
 Key: HDDS-468
 URL: https://issues.apache.org/jira/browse/HDDS-468
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


The below 2 jars are copied to the distribution without any ozone version:

hadoop-ozone-datanode-plugin.jar

hadoop-ozone-filesystem.jar

The ozone version number should be appended at the end, as the other ozone jars have.






[jira] [Created] (HDDS-469) Rename 'ozone oz' to 'ozone fs'

2018-09-14 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-469:
--

 Summary: Rename 'ozone oz' to 'ozone fs'
 Key: HDDS-469
 URL: https://issues.apache.org/jira/browse/HDDS-469
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


Ozone shell volume/bucket/key sub-commands are invoked using _ozone oz_.

_ozone oz_ sounds repetitive. Instead we can replace it with _ozone fs_, which is also consistent with _hadoop fs_, the command that invokes Hadoop filesystem commands.


