[jira] [Created] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-600:
-

 Summary: Mapreduce example fails with 
java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
character
 Key: HDDS-600
 URL: https://issues.apache.org/jira/browse/HDDS-600
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Set up a Hadoop cluster where Ozone is also installed. Ozone can be referenced 
via o3://xx.xx.xx.xx:9889.
{code:java}
[root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
o3://xx.xx.xx.xx:9889/volume1/
2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ {
"volumeName" : "volume1",
"bucketName" : "bucket1",
"createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
"acls" : [ {
"type" : "USER",
"name" : "root",
"rights" : "READ_WRITE"
}, {
"type" : "GROUP",
"name" : "root",
"rights" : "READ_WRITE"
} ],
"versioning" : "DISABLED",
"storageType" : "DISK"
} ]
[root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
o3://xx.xx.xx.xx:9889/volume1/bucket1
2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ {
"version" : 0,
"md5hash" : null,
"createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
"modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
"size" : 0,
"keyName" : "mr_job_dir"
} ]
[root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
HDFS is also set up fine, as shown below:
{code:java}
[root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
/tmp/mr_jobs/input/
Found 1 items
-rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
/tmp/mr_jobs/input/wordcount_input_1.txt
[root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
Now try to run the MapReduce example job against Ozone (o3):
{code:java}
[root@ctr-e138-1518143905142-510793-01-02 ~]# 
/usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ 
o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
character : :
at 
org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
18/10/09 07:15:39 INFO conf.Configuration: Removed undeclared tags:
[root@ctr-e138-1518143905142-510793-01-02 ~]#
{code}
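
For context, the stack trace shows the failure happens while OzoneFileSystem 
initializes: the volume name handed to the resource-name check still contains 
the ':' from the host:port authority. A minimal sketch of that kind of 
character check (illustrative only, NOT the actual HddsClientUtils code):
{code:java}
// Minimal sketch, assuming a lowercase/digit/'.'/'-' whitelist; this is not
// the real HddsClientUtils.verifyResourceName implementation.
public final class ResourceNameCheckSketch {

  static void verifyResourceName(String name) {
    for (char c : name.toCharArray()) {
      boolean supported = Character.isLowerCase(c) || Character.isDigit(c)
          || c == '.' || c == '-';
      if (!supported) {
        throw new IllegalArgumentException(
            "Bucket or Volume name has an unsupported character : " + c);
      }
    }
  }

  public static void main(String[] args) {
    // If the o3:// authority ("xx.xx.xx.xx:9889") ends up treated as part of
    // the volume name, the ':' fails the check -- matching the trace above.
    verifyResourceName("xx.xx.xx.xx:9889");
  }
}
{code}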



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Created] (HDDS-601) SCMException: No such datanode

2018-10-09 Thread Soumitra Sulav (JIRA)
Soumitra Sulav created HDDS-601:
---

 Summary: SCMException: No such datanode
 Key: HDDS-601
 URL: https://issues.apache.org/jira/browse/HDDS-601
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.3.0
Reporter: Soumitra Sulav


Encountered the exception below after I changed a configuration in 
ozone-site.xml and restarted SCM and the DataNodes:

Ozone Cluster : 1 SCM, 1 OM, 3 DNs
{code:java}
2018-10-04 09:35:59,716 INFO org.apache.hadoop.hdds.server.BaseHttpServer: HTTP 
server of SCM is listening at http://0.0.0.0:9876
2018-10-04 09:36:03,618 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
SCM receive heartbeat from unregistered datanode 
127a8e17-b2df-4663-924c-1a6909adb293{ip: 172.22.119.19, host: 
hcatest-2.openstacklocal}
2018-10-04 09:36:09,063 WARN org.apache.hadoop.hdds.scm.node.SCMNodeManager: 
SCM receive heartbeat from unregistered datanode 
82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
hcatest-3.openstacklocal}
2018-10-04 09:36:09,083 ERROR 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler: Error on 
processing container report from datanode 
82555af0-a1f9-447a-ad40-c524ba6e1317{ip: 172.22.119.190, host: 
hcatest-3.openstacklocal}
org.apache.hadoop.hdds.scm.exceptions.SCMException: No such datanode
 at 
org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap.setContainersForDatanode(Node2ContainerMap.java:82)
 at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:97)
 at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:45)
 at 
org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748){code}
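
For context: after the restart, the DataNodes heartbeat before they have 
(re)registered, so SCM's node-to-container map has no entry for them and 
rejects their container reports. A hedged sketch of that kind of guard 
(hypothetical names; the real class is Node2ContainerMap):
{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: a map of registered datanodes that rejects container-set
// updates for nodes it has never seen, mirroring the failure above.
public class NodeContainerMapSketch {
  private final Map<UUID, Set<Long>> node2Containers =
      new ConcurrentHashMap<>();

  public void register(UUID datanode) {
    node2Containers.putIfAbsent(datanode, Collections.emptySet());
  }

  public void setContainersForDatanode(UUID datanode, Set<Long> containers) {
    if (!node2Containers.containsKey(datanode)) {
      // Corresponds to the SCMException("No such datanode") in the trace.
      throw new IllegalStateException("No such datanode");
    }
    node2Containers.put(datanode, containers);
  }
}
{code}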



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-602) Release Ozone 0.3.0

2018-10-09 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-602:
-

 Summary: Release Ozone 0.3.0
 Key: HDDS-602
 URL: https://issues.apache.org/jira/browse/HDDS-602
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Elek, Marton


Similar to HDDS-214, I am opening this issue to discuss all of the 
release-related work in this JIRA.

The JIRA ID can also be used in the commit messages of the technical commits 
(such as the tag/version bump).

As a summary, Ozone 0.3.0 could be released in the same way as Ozone 0.2.1. We 
don't need to upload the artifacts to the Maven repository (that requires 
additional work).

Roadmap is here: 
[https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Road+Map]

The branch is ozone-0.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [OZONE] Community calls

2018-10-09 Thread Elek, Marton
As a memo:

The discussion was mostly about the current state of Ozone
development and the roadmap, which is available here:

https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Road+Map


The next release will be 0.3.0, with an S3-compatible REST interface and a
more stable ozonefs (i.e., more testing).

The ozone-0.3 branch was cut yesterday evening. Development of 0.4.0 (with
security features) will continue on the trunk branch.

I also opened the JIRA item for the 0.3.0 release: HDDS-602. If you
have any concerns/ideas about the technical aspects of the release, please
comment there.

Any feedback is welcome.

Thanks,
Marton

On 10/5/18 5:51 PM, Elek, Marton wrote:
> 
> Hi everybody,
> 
> 
> We start a new community call series about Apache Hadoop Ozone. It's an
> informal discussion about the current items, short-term plans,
> directions and contribution possibilities.
> 
> Please join if you are interested or have questions about Ozone.
> 
> For more details, please check:
> 
> https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls
> 
> Marton
> 
> ps: As it's written in the wiki, this is not a replacement of the
> mailing lists. All main proposals/decisions will be published to the
> mailing list/wiki to generate further discussion.
> 
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-603) Add BlockCommitSequenceId field per Container and expose it in Container Reports

2018-10-09 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-603:


 Summary: Add BlockCommitSequenceId field per Container and expose 
it in Container Reports
 Key: HDDS-603
 URL: https://issues.apache.org/jira/browse/HDDS-603
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.3.0


HDDS-450 adds a blockCommitSequenceId field per block commit in the container 
DB. The blockCommitSequenceId now needs to be updated per container replica, 
and the same needs to be reported to SCM via container reports. This JIRA aims 
to address that.
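
A rough sketch of the shape of the change (field and method names are 
illustrative, not the actual patch):
{code:java}
// Sketch only: a per-replica record carrying the highest committed
// blockCommitSequenceId so it can be included in the container report to SCM.
public class ContainerReplicaReportSketch {
  private final long containerId;
  private long blockCommitSequenceId;  // max BCSID committed on this replica

  public ContainerReplicaReportSketch(long containerId) {
    this.containerId = containerId;
  }

  /** Called on each block commit; BCSIDs only move forward. */
  public void updateBlockCommitSequenceId(long bcsId) {
    this.blockCommitSequenceId = Math.max(this.blockCommitSequenceId, bcsId);
  }

  /** Reported to SCM in the container report. */
  public long getBlockCommitSequenceId() {
    return blockCommitSequenceId;
  }

  public long getContainerId() {
    return containerId;
  }
}
{code}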



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/

[Oct 8, 2018 6:24:56 AM] (sunilg) YARN-7825. [UI2] Maintain constant horizontal 
application info bar for
[Oct 8, 2018 2:17:42 PM] (elek) HDDS-521. Implement DeleteBucket REST endpoint. 
Contributed by Bharat
[Oct 8, 2018 4:40:37 PM] (haibochen) YARN-8659. RMWebServices returns only 
RUNNING apps when filtered with
[Oct 8, 2018 5:05:18 PM] (inigoiri) YARN-8843. updateNodeResource does not 
support units for memory.
[Oct 8, 2018 5:56:47 PM] (eyang) YARN-8763.  Added node manager websocket API 
for accessing containers.  




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static
   test_libhdfs_threaded_hdfspp_test_shim_static

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
   hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-compile-javac-root.txt [300K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-checkstyle-root.txt [17M]

   hadolint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/pathlen.txt [12K]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-patch-pylint.txt [40K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/whitespace-eol.txt [9.3M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/whitespace-tabs.txt [1.1M]

   xml:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/xml.txt [4.0K]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-hdds_client.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-hdds_framework.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [12K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-hdds_tools.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_client.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_common.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt [40K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_tools.txt [8.0K]

   javadoc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-javadoc-javadoc-root.t

[jira] [Created] (HDFS-13978) Review of DiskBalancer

2018-10-09 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13978:
--

 Summary: Review of DiskBalancer
 Key: HDFS-13978
 URL: https://issues.apache.org/jira/browse/HDFS-13978
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, hdfs
Affects Versions: 3.2.0
 Environment: * Use {{ArrayList}} instead of {{LinkedList}}, especially 
because this code uses the {{List#get()}} method, which is very slow for a 
{{LinkedList}} since it has to traverse every node to reach the offset, making 
the loop O(N^2). See the sketch after this list.

{code}
ExtendedBlock getNextBlock(List<FsVolumeSpi.BlockIterator> poolIters,
    DiskBalancerWorkItem item) {
  Preconditions.checkNotNull(poolIters);
  int currentCount = 0;
  ExtendedBlock block = null;
  while (block == null && currentCount < poolIters.size()) {
    currentCount++;
    // get(index) is O(index) on a LinkedList, so this loop is O(N^2) overall.
    int index = poolIndex++ % poolIters.size();
    FsVolumeSpi.BlockIterator currentPoolIter = poolIters.get(index);
    block = getBlockToCopy(currentPoolIter, item);
  }
  return block;
}
{code}
* Do not "log and throw" errors.  This is an anti-pattern and should be 
avoided. Log or throw, but don't do both.  Removed some logging
* Improved other logging statements
* Improved the {{hashcode}} method of one of the inner classes.  It was 
instantiating a List and performing iteration for every call.  Replace with 
Eclipse-generated hashcode method.
* Fixed compiler warnings for deprecated code or code that was not parameterized
* Fix check style issue
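
A self-contained illustration of the indexed-get cost difference (illustrative 
only, not DiskBalancer code):
{code:java}
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// get(i) is O(1) on ArrayList but O(i) on LinkedList, so the indexed loop
// below is O(N^2) on a LinkedList and O(N) on an ArrayList.
public class IndexedGetCostSketch {
  public static void main(String[] args) {
    final int n = 100_000;
    List<Integer> linked = new LinkedList<>();
    List<Integer> array = new ArrayList<>(n);   // pre-sized, as in the patch
    for (int i = 0; i < n; i++) {
      linked.add(i);
      array.add(i);
    }

    long sum = 0;
    long start = System.nanoTime();
    for (int i = 0; i < linked.size(); i++) {
      sum += linked.get(i);                     // walks from the head each time
    }
    long linkedMs = (System.nanoTime() - start) / 1_000_000;

    start = System.nanoTime();
    for (int i = 0; i < array.size(); i++) {
      sum += array.get(i);                      // constant-time array access
    }
    long arrayMs = (System.nanoTime() - start) / 1_000_000;

    System.out.println("LinkedList: " + linkedMs + " ms, ArrayList: "
        + arrayMs + " ms (sum=" + sum + ")");
  }
}
{code}
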
Reporter: BELUGA BEHR
 Attachments: HDFS-13978.1.patch





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13979) Review StateStoreFileSystemImpl Class

2018-10-09 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13979:
--

 Summary: Review StateStoreFileSystemImpl Class
 Key: HDFS-13979
 URL: https://issues.apache.org/jira/browse/HDFS-13979
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation
Affects Versions: 3.2.0
Reporter: BELUGA BEHR


* Replace {{LinkedList}} with a pre-sized {{ArrayList}}
* Add exception information to WARN/ERROR logging (see the sketch below)
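
A hedged sketch of the logging change (illustrative only, not the actual 
patch):
{code:java}
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Passing the Throwable as the last argument makes SLF4J log the full stack
// trace; concatenating it into the message string drops the trace.
public class LoggingWithExceptionSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(LoggingWithExceptionSketch.class);

  void save(String path) {
    try {
      throw new IOException("disk full");    // stand-in for a real write
    } catch (IOException e) {
      // Before: LOG.error("Cannot write " + path);  -> cause and trace lost
      LOG.error("Cannot write {}", path, e); // after: exception included
    }
  }
}
{code}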



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13980) Review of DBNameNodeConnector Class

2018-10-09 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13980:
--

 Summary: Review of DBNameNodeConnector Class
 Key: HDFS-13980
 URL: https://issues.apache.org/jira/browse/HDFS-13980
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: diskbalancer
Affects Versions: 3.2.0
 Environment: * Removed the anti-pattern "log and throw": log or throw, 
don't do both (see the sketch below)
* Used a pre-sized {{ArrayList}} instead of {{LinkedList}}
* Fixed some formatting issues
* Re-ordered imports into the correct order
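
A hedged sketch of the "log and throw" fix (hypothetical code, not the actual 
connector):
{code:java}
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Names are hypothetical; only the pattern matters.
public class LogAndThrowSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(LogAndThrowSketch.class);

  // Anti-pattern: the same failure is logged here AND wherever the exception
  // is eventually caught, producing duplicate noise in the logs.
  void connectBad(String uri) throws IOException {
    if (uri == null) {
      IOException e = new IOException("Unable to connect: null URI");
      LOG.error("Unable to connect", e);
      throw e;
    }
  }

  // Fix: throw only, and let exactly one layer decide how to log it.
  void connectGood(String uri) throws IOException {
    if (uri == null) {
      throw new IOException("Unable to connect: null URI");
    }
  }
}
{code}
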
Reporter: BELUGA BEHR
 Attachments: HDFS-13980.1.patch





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13981) Review of AvailableSpaceResolver.java

2018-10-09 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13981:
--

 Summary: Review of AvailableSpaceResolver.java
 Key: HDFS-13981
 URL: https://issues.apache.org/jira/browse/HDFS-13981
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation
Affects Versions: 3.2.0
Reporter: BELUGA BEHR
 Attachments: HDFS-13981.1.patch

* No behavior changes, just optimizing and paring down the code



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-604) Correct Ozone getOzoneConf description

2018-10-09 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-604:
---

 Summary: Correct Ozone getOzoneConf description 
 Key: HDDS-604
 URL: https://issues.apache.org/jira/browse/HDDS-604
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru


The {{./ozone getozoneconf}} subcommand description refers to the subcommand as 
{{getconf}}. We should consistently call it either {{getozoneconf}} or 
{{getconf}} in both places.
{code:java}
$ bin/ozone getozoneconf
ozone getconf is utility for getting configuration information from the config file.

ozone getconf
   [-includeFile]                gets the include file path that defines the datanodes that can join the cluster.
   [-excludeFile]                gets the exclude file path that defines the datanodes that need to decommissioned.
   [-ozonemanagers]              gets list of Ozone Manager nodes in the cluster
   [-storagecontainermanagers]   gets list of ozone storage container manager nodes in the cluster
   [-confKey [key]]              gets a specific key from the configuration
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-605) TestOzoneConfigurationFields fails on Trunk

2018-10-09 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-605:
-

 Summary: TestOzoneConfigurationFields fails on Trunk
 Key: HDDS-605
 URL: https://issues.apache.org/jira/browse/HDDS-605
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Anu Engineer


HDDS-354 removed the following keys from the code:
 * {{"hdds.lock.suppress.warning.interval.ms";}}
 * {{"hdds.lock.suppress.warning.interval.ms";}}

We need to remove the same from ozone-default.xml; lines 1108 - 1129 need to 
be removed for this test to pass.

 

cc: [~hanishakoneru], [~bharatviswa]

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-605) TestOzoneConfigurationFields fails on Trunk

2018-10-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-605.
-
Resolution: Duplicate

> TestOzoneConfigurationFields fails on Trunk
> ---
>
> Key: HDDS-605
> URL: https://issues.apache.org/jira/browse/HDDS-605
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Priority: Major
>  Labels: newbie
>
> HDDS-354 removed the following keys from code. 
>  * {{"hdds.lock.suppress.warning.interval.ms";}}
>  * {{"hdds.lock.suppress.warning.interval.ms";}}
> {{We need to remove the same from ozone-default.xml. Lines 1108 - 1129 needs 
> to be removed for this test to pass.}}
>  
> {{cc: [~hanishakoneru], [~bharatviswa]}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-606) Create delete s3Bucket

2018-10-09 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-606:
---

 Summary: Create delete s3Bucket
 Key: HDDS-606
 URL: https://issues.apache.org/jira/browse/HDDS-606
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


We should have a new API to delete buckets created via S3.

This delete should actually remove the bucket entry from the bucket table and 
also remove its mapping from the S3 table in Ozone Manager.
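
A hedged sketch of what that involves (table names and helper types are 
hypothetical, not the OM implementation):
{code:java}
import java.io.IOException;

// Sketch only: shows the two deletes the description calls for -- the bucket
// entry and the S3 name mapping.
public class DeleteS3BucketSketch {

  interface Table {                       // hypothetical key/value table
    String get(String key) throws IOException;
    void delete(String key) throws IOException;
  }

  private final Table s3Table;            // s3BucketName -> "volume/bucket"
  private final Table bucketTable;        // "volume/bucket" -> bucket info

  DeleteS3BucketSketch(Table s3Table, Table bucketTable) {
    this.s3Table = s3Table;
    this.bucketTable = bucketTable;
  }

  void deleteS3Bucket(String s3BucketName) throws IOException {
    String mapping = s3Table.get(s3BucketName);
    if (mapping == null) {
      throw new IOException("No such S3 bucket: " + s3BucketName);
    }
    bucketTable.delete(mapping);          // remove from the bucket table
    s3Table.delete(s3BucketName);         // remove the S3 mapping as well
  }
}
{code}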



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-607) Support S3 testing via MiniOzoneCluster

2018-10-09 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-607:
-

 Summary: Support S3 testing via MiniOzoneCluster
 Key: HDDS-607
 URL: https://issues.apache.org/jira/browse/HDDS-607
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Anu Engineer


To write normal unit tests, we need support for the S3Gateway along with 
MiniOzoneCluster. This JIRA proposes to add that. It will allow us to write 
simple unit tests using the AWS S3 Java SDK.
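
A hedged sketch of the kind of test this would enable; obtaining the 
S3Gateway endpoint from MiniOzoneCluster is the part this JIRA would add, so 
the endpoint is a plain parameter here:
{code:java}
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Sketch only: a bucket round trip through the S3Gateway via the AWS SDK.
public class S3GatewaySmokeSketch {

  static void bucketRoundTrip(String s3GatewayEndpoint) {
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicAWSCredentials("any", "any")))
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
            s3GatewayEndpoint, "us-east-1"))
        .withPathStyleAccessEnabled(true)  // no DNS-style bucket hosts locally
        .build();

    s3.createBucket("test-bucket");
    if (!s3.doesBucketExistV2("test-bucket")) {
      throw new AssertionError("bucket not visible via S3Gateway");
    }
  }
}
{code}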



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-608) Mapreduce example fails with Access denied for user hdfs. Superuser privilege is required

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-608:
-

 Summary: Mapreduce example fails with Access denied for user hdfs. 
Superuser privilege is required
 Key: HDDS-608
 URL: https://issues.apache.org/jira/browse/HDDS-608
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


Right now only administrators can submit an MR job. All other users, including 
hdfs, will fail with the error below:
{code:java}
-bash-4.2$ ./ozone sh bucket create /volume2/bucket2
2018-10-09 23:03:46,399 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-10-09 23:03:47,473 INFO rpc.RpcClient: Creating Bucket: volume2/bucket2, 
with Versioning false and Storage Type set to DISK
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_job
18/10/09 23:04:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197:10200
18/10/09 23:04:10 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/09 23:04:10 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539125785626_0003
18/10/09 23:04:11 INFO input.FileInputFormat: Total input files to process : 1
18/10/09 23:04:11 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/09 23:04:11 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/09 23:04:11 INFO mapreduce.JobSubmitter: number of splits:1
18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539125785626_0003
18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:12 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:12 INFO impl.YarnClientImpl: Submitted application 
application_1539125785626_0003
18/10/09 23:04:12 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539125785626_0003/
18/10/09 23:04:12 INFO mapreduce.Job: Running job: job_1539125785626_0003
18/10/09 23:04:22 INFO mapreduce.Job: Job job_1539125785626_0003 running in 
uber mode : false
18/10/09 23:04:22 INFO mapreduce.Job: map 0% reduce 0%
18/10/09 23:04:30 INFO mapreduce.Job: map 100% reduce 0%
18/10/09 23:04:36 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0003_r_00_0, Status : FAILED
Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access 
denied for user hdfs. Superuser privilege is required.
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.checkAdminAccess(StorageContainerManager.java:830)
at 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:190)
at 
org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:128)
at 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:12392)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy19.getContainerWithPipeline(Unknown Source)
at 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolClientSideTranslatorPB.java:156)
at 
org.apach
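
A hedged sketch of the kind of superuser precheck that fails in the trace 
above (illustrative names, not the actual StorageContainerManager code):
{code:java}
import java.io.IOException;
import java.util.Set;

// Sketch only: getContainerWithPipeline is gated by an admin check, so any
// non-admin caller -- including the hdfs user -- is rejected.
public class AdminCheckSketch {
  private final Set<String> scmAdminUsernames;

  AdminCheckSketch(Set<String> scmAdminUsernames) {
    this.scmAdminUsernames = scmAdminUsernames;
  }

  void checkAdminAccess(String remoteUser) throws IOException {
    if (remoteUser != null && !scmAdminUsernames.contains(remoteUser)) {
      throw new IOException("Access denied for user " + remoteUser
          + ". Superuser privilege is required.");
    }
  }
}
{code}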

[jira] [Created] (HDDS-609) Mapreduce example fails with Allocate block failed, error:INTERNAL_ERROR

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-609:
-

 Summary: Mapreduce example fails with Allocate block failed, 
error:INTERNAL_ERROR
 Key: HDDS-609
 URL: https://issues.apache.org/jira/browse/HDDS-609
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_job5
18/10/09 23:37:07 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:08 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197:10200
18/10/09 23:37:08 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/09 23:37:09 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539125785626_0007
18/10/09 23:37:09 INFO input.FileInputFormat: Total input files to process : 1
18/10/09 23:37:09 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/09 23:37:09 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: number of splits:1
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539125785626_0007
18/10/09 23:37:09 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/09 23:37:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:10 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/09 23:37:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:37:10 INFO impl.YarnClientImpl: Submitted application 
application_1539125785626_0007
18/10/09 23:37:10 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-05.hwx.site:8088/proxy/application_1539125785626_0007/
18/10/09 23:37:10 INFO mapreduce.Job: Running job: job_1539125785626_0007
18/10/09 23:37:17 INFO mapreduce.Job: Job job_1539125785626_0007 running in 
uber mode : false
18/10/09 23:37:17 INFO mapreduce.Job: map 0% reduce 0%
18/10/09 23:37:24 INFO mapreduce.Job: map 100% reduce 0%
18/10/09 23:37:29 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0007_r_00_0, Status : FAILED
Error: java.io.IOException: Allocate block failed, error:INTERNAL_ERROR
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:576)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:475)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
at 
org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:47)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:78)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:93)
at 
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:559)
at 
org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at 
org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:64)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:52)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:628)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

18/10/09 23:37:35 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0007_r_00_1, Status : FAILED
Error: java.io.IOException: Allocate block failed, error:INTERNAL_ERROR
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.allocateBlock(OzoneManagerProtocolClientSideTranslatorPB.java:576)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.a

[jira] [Created] (HDDS-610) On restart of SCM it fails to register DataNodes

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-610:
-

 Summary: On restart of SCM it fails to register DataNodes
 Key: HDDS-610
 URL: https://issues.apache.org/jira/browse/HDDS-610
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
2018-10-09 23:34:11,105 INFO 
org.apache.hadoop.hdds.scm.server.StorageContainerManager: STARTUP_MSG:
/
STARTUP_MSG: Starting StorageContainerManager
STARTUP_MSG: host = 
ctr-e138-1518143905142-510793-01-04.hwx.site/172.27.79.197
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.3.0-SNAPSHOT
STARTUP_MSG: classpath = 
/tmp/ozone-0.3.0-SNAPSHOT/etc/hadoop:/tmp/ozone-0.3.0-SNAPSHOT/share/hadoop/common/*:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/kerb-simplekdc-1.0.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jackson-core-2.9.5.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/protobuf-java-2.5.0.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/netty-3.10.5.Final.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/commons-beanutils-1.9.3.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/okio-1.6.0.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/kerb-core-1.0.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jetty-security-9.3.19.v20170502.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/bcpkix-jdk15on-1.54.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/ratis-client-0.3.0-eca3531-SNAPSHOT.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jsr305-3.0.0.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/leveldbjni-all-1.8.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/j2objc-annotations-1.3.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jctools-core-2.1.2.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/auto-common-0.8.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/ratis-common-0.3.0-eca3531-SNAPSHOT.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jzlib-1.1.3.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/htrace-core4-4.1.0-incubating.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/auto-value-1.6.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/kerb-util-1.0.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/hadoop-hdds-server-framework-0.3.0-SNAPSHOT.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jetty-servlet-9.3.19.v20170502.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jackson-jaxrs-1.9.13.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/kerby-xdr-1.0.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/hadoop-hdds-client-0.3.0-SNAPSHOT.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jackson-core-asl-1.9.13.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/lz4-1.3.0.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/commons-net-3.6.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/ratis-netty-0.3.0-eca3531-SNAPSHOT.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/log4j-api-2.11.0.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/netty-all-4.0.52.Final.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/hamcrest-all-1.3.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/error_prone_annotations-2.2.0.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/slf4j-log4j12-1.7.25.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/kerby-config-1.0.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/auto-service-1.0-rc4.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/metrics-core-3.2.4.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/kerby-pkix-1.0.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/xz-1.0.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/rocksdbjni-5.14.2.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/nimbus-jose-jwt-4.41.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/hadoop-hdds-container-service-0.3.0-SNAPSHOT.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/hadoop-auth-3.3.0-SNAPSHOT.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jsr311-api-1.1.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jackson-annotations-2.9.5.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/javax.servlet-api-3.1.0.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/kerb-server-1.0.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/zookeeper-3.4.13.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/avro-1.7.7.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/kerb-client-1.0.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jackson-xc-1.9.13.jar:/tmp/ozone-0.3.0-SNAPSH
OT/share/ozone/lib/disruptor-3.4.2.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/guava-11.0.2.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/curator-framework-2.12.0.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/jackson-databind-2.9.5.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/dnsjava-2.1.7.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/hadoop-common-3.3.0-SNAPSHOT.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/commons-configuration2-2.1.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/json-smart-2.3.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/token-provider-1.0.1.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/log4j-1.2.17.jar:/tmp/ozone-0.3.0-SNAPSHOT/share/ozone/lib/commons-da

[jira] [Created] (HDDS-611) SCM UI is not reflecting the changes done in ozone-site.xml

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-611:
-

 Summary: SCM UI is not reflecting the changes done in 
ozone-site.xml
 Key: HDDS-611
 URL: https://issues.apache.org/jira/browse/HDDS-611
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari
 Attachments: Screen Shot 2018-10-09 at 4.49.58 PM.png

ozone-site.xml was updated to change hdds.scm.chillmode.enabled to false. This 
is reflected properly by getozoneconf, as shown below:
{code:java}
[root@ctr-e138-1518143905142-510793-01-04 bin]# ./ozone getozoneconf 
-confKey hdds.scm.chillmode.enabled
2018-10-09 23:52:12,621 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
false
{code}
But the SCM UI does not reflect this change; it still shows the old value of 
true. Please see the attached screenshot. !Screen Shot 2018-10-09 at 4.49.58 PM.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-612) Even after setting hdds.scm.chillmode.enabled to false, SCM allocateblock fails with ChillModePrecheck exception

2018-10-09 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-612:
-

 Summary: Even after setting hdds.scm.chillmode.enabled to false, 
SCM allocateblock fails with ChillModePrecheck exception
 Key: HDDS-612
 URL: https://issues.apache.org/jira/browse/HDDS-612
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
2018-10-09 23:11:58,047 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 
on 9863, call Call#70 Retry#0 
org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 
172.27.56.9:53442
org.apache.hadoop.hdds.scm.exceptions.SCMException: ChillModePrecheck failed 
for allocateBlock
at 
org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:38)
at 
org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:30)
at org.apache.hadoop.hdds.scm.ScmUtils.preCheck(ScmUtils.java:42)
at 
org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:191)
at 
org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:143)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:74)
at 
org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:6255)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-613) Update HeadBucket, DeleteBucket to not have volume in path

2018-10-09 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-613:
---

 Summary: Update HeadBucket, DeleteBucket to not have volume in 
path
 Key: HDDS-613
 URL: https://issues.apache.org/jira/browse/HDDS-613
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Update these API requests so that they do not have the volume in their path 
parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13982) convertStorageType() in PBHelperClient is not easy to extend when adding new storage types

2018-10-09 Thread Xiang Li (JIRA)
Xiang Li created HDFS-13982:
---

 Summary: convertStorageType() in PBHelperClient is not easy to 
extend when adding new storage types
 Key: HDFS-13982
 URL: https://issues.apache.org/jira/browse/HDFS-13982
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Xiang Li






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org