[jira] [Created] (HDFS-13896) RBF Web UI not displaying clearly which target path is pointing to which name service in mount table

2018-09-05 Thread venkata ram kumar ch (JIRA)
venkata ram kumar ch created HDFS-13896:
---

 Summary: RBF Web UI not displaying clearly which target path is 
pointing to which name service in mount table 
 Key: HDFS-13896
 URL: https://issues.apache.org/jira/browse/HDFS-13896
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: venkata ram kumar ch
Assignee: venkata ram kumar ch


Commands :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -rm /apps
18/09/05 12:31:36 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Successfully removed mount point /apps

Commands:/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -add /apps hacluster1 /opt
18/09/05 12:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
INFO: Watching file:/opt/hadoopclient/HDFS/hadoop/etc/hadoop/log4j.properties 
for changes with interval : 6
Successfully added mount point /apps

Command :/HDFS/hadoop/bin> ./hdfs dfsrouteradmin -ls
18/09/05 12:34:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Mount Table Entries:
Source Destinations Owner Group Mode Quota/Usage
/apps hacluster1->/opt,hacluster2->/opt1 securedn users rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]

WebUI : 
h1. Mount Table
||Global path||Target nameservice||Target path||Order||Read only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
|/apps|hacluster1,hacluster2|/opt,/opt1|HASH| |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/09/05 15:02:54|2018/09/05 15:02:25|
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-403) infoKey shows wrong "createdOn", "modifiedOn" metadata for key

2018-09-05 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-403:
---

 Summary: infoKey shows wrong "createdOn", "modifiedOn" metadata 
for key
 Key: HDDS-403
 URL: https://issues.apache.org/jira/browse/HDDS-403
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


1. Ran the putKey command for a file:
{noformat}
[root@ctr-e138-1518143905142-459606-01-03 bin]# ./ozone oz -putKey 
/test-vol1/test-bucket1/file1 -file /etc/passwd -v
2018-09-05 10:25:11,498 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
Volume Name : test-vol1
Bucket Name : test-bucket1
Key Name : file1
File Hash : 8164cc3d5b05c44b73a6277661aa4645
2018-09-05 10:25:12,377 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
2018-09-05 10:25:12,390 INFO conf.ConfUtils: raft.grpc.message.size.max = 
33554432 (custom)
2018-09-05 10:25:12,402 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
300 ms (default)
2018-09-05 10:25:12,407 INFO conf.ConfUtils: 
raft.client.async.outstanding-requests.max = 100 (default)
2018-09-05 10:25:12,407 INFO conf.ConfUtils: 
raft.client.async.scheduler-threads = 3 (default)
2018-09-05 10:25:12,518 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
1MB (=1048576) (default)
2018-09-05 10:25:12,518 INFO conf.ConfUtils: raft.grpc.message.size.max = 
33554432 (custom)
2018-09-05 10:25:12,866 INFO conf.ConfUtils: raft.client.rpc.request.timeout = 
3000 ms (default)
2018-09-05 10:25:13,644 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
1MB (=1048576) (default)
2018-09-05 10:25:13,644 INFO conf.ConfUtils: raft.grpc.message.size.max = 
33554432 (custom)
2018-09-05 10:25:13,645 INFO conf.ConfUtils: raft.client.rpc.request.timeout = 
3000 ms (default)
[root@ctr-e138-1518143905142-459606-01-03 bin]# ./ozone oz -getKey 
/test-vol1/test-bucket1/file1 -file getkey3
2018-09-05 10:25:22,020 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-09-05 10:25:22,778 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
2018-09-05 10:25:22,790 INFO conf.ConfUtils: raft.grpc.message.size.max = 
33554432 (custom)
2018-09-05 10:25:22,800 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
300 ms (default)
2018-09-05 10:25:22,804 INFO conf.ConfUtils: 
raft.client.async.outstanding-requests.max = 100 (default)
2018-09-05 10:25:22,805 INFO conf.ConfUtils: 
raft.client.async.scheduler-threads = 3 (default)
2018-09-05 10:25:22,890 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
1MB (=1048576) (default)
2018-09-05 10:25:22,890 INFO conf.ConfUtils: raft.grpc.message.size.max = 
33554432 (custom)
2018-09-05 10:25:23,250 INFO conf.ConfUtils: raft.client.rpc.request.timeout = 
3000 ms (default)
2018-09-05 10:25:24,066 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
1MB (=1048576) (default)
2018-09-05 10:25:24,067 INFO conf.ConfUtils: raft.grpc.message.size.max = 
33554432 (custom)
2018-09-05 10:25:24,067 INFO conf.ConfUtils: raft.client.rpc.request.timeout = 
3000 ms (default){noformat}
2. Ran the infoKey command on that key:
{noformat}
[root@ctr-e138-1518143905142-459606-01-03 bin]# ./ozone oz -infoKey 
/test-vol1/test-bucket1/file1 -v
2018-09-05 10:54:42,053 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
Volume Name : test-vol1
Bucket Name : test-bucket1
Key Name : file1
{
 "version" : 0,
 "md5hash" : null,
 "createdOn" : "Sat, 14 Dec +114522267 00:51:17 GMT",
 "modifiedOn" : "Fri, 09 Jun +50648 04:30:12 GMT",
 "size" : 4659,
 "keyName" : "file1",
 "keyLocations" : [ {
 "containerID" : 16,
 "localID" : 1536143112267,
 "length" : 4659,
 "offset" : 0
 } ]
}{noformat}
"createdOn" and "modifiedOn" metadata are incorrect.

Here is the current date:
{noformat}
[root@ctr-e138-1518143905142-459606-01-03 bin]# date
Wed Sep 5 10:54:52 UTC 2018{noformat}
Also, the "md5hash" for the key is showing as null.
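One plausible mechanism for dates this far in the future (an assumption, not the confirmed root cause) is a seconds-versus-milliseconds mix-up: if an epoch-millisecond timestamp is scaled by 1000 once more somewhere in the pipeline, the rendered date lands tens of thousands of years out. A small sketch, using a millisecond value of the same magnitude as the localID in the output above:

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.util.Date;

public class EpochUnitsDemo {
    // Returns the UTC year a millisecond timestamp falls in.
    static int yearOf(long epochMillis) {
        return LocalDateTime
            .ofInstant(Instant.ofEpochMilli(epochMillis), ZoneOffset.UTC)
            .getYear();
    }

    public static void main(String[] args) {
        long epochMillis = 1536143112267L; // 2018-09-05 UTC
        System.out.println(new Date(epochMillis)); // a sane 2018 date
        System.out.println(yearOf(epochMillis));   // 2018
        // If the millisecond value is multiplied by 1000 again (i.e. treated
        // as seconds and re-converted), the date jumps tens of thousands of
        // years ahead -- the same symptom the infoKey output shows.
        System.out.println(yearOf(epochMillis * 1000L));
    }
}
```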






Re: [Vote] Merge discussion for Node attribute support feature YARN-3409

2018-09-05 Thread Naganarasimha Garla
Hi All,
 Thanks for the feedback, folks. Based on the positive responses, we are
starting a vote thread for merging YARN-3409 to master.

Regards,
+ Naga & Sunil

On Wed, 5 Sep 2018 2:51 am Wangda Tan,  wrote:

> +1 for the merge, it's going to be a great addition to the 3.2.0 release. Thanks
> to everybody for pushing this feature to completion.
>
> Best,
> Wangda
>
> On Tue, Sep 4, 2018 at 8:25 AM Bibinchundatt 
> wrote:
>
>> +1 for the merge. The feature would be a good addition to the 3.2 release.
>>
>> --
>> Bibin A Chundatt
>> M: +91-9742095715
>> E: bibin.chund...@huawei.com
>> 2012 Laboratories-IT&Cloud BU Branch Dept.
>> From:Naganarasimha Garla
>> To:common-...@hadoop.apache.org,Hdfs-dev,yarn-...@hadoop.apache.org,
>> mapreduce-...@hadoop.apache.org,
>> Date:2018-08-29 20:00:44
>> Subject:[Discuss] Merge discussion for Node attribute support feature
>> YARN-3409
>>
>> Hi All,
>>
>> We would like to hear your thoughts on merging “Node Attributes Support in
>> YARN” branch (YARN-3409) [2] into trunk in a few weeks. The goal is to get
>> it in for HADOOP 3.2.
>>
>> *Major work happened in this branch*
>>
>> YARN-6858. Attribute Manager to store and provide node attributes in RM
>> YARN-7871. Support Node attributes reporting from NM to RM( distributed
>> node attributes)
>> YARN-7863. Modify placement constraints to support node attributes
>> YARN-7875. Node Attribute store for storing and recovering attributes
>>
>> *Detailed Design:*
>>
>> Please refer [1] for detailed design document.
>>
>> *Testing Efforts:*
>>
>> We did detailed tests for the feature in the last few weeks.
>> This feature will be enabled only when Node Attributes constraints are
>> specified through SchedulingRequest from AM.
>> The Attribute Manager implementation stores and recovers Node Attributes,
>> and works with the existing placement constraints.
>>
>> *Regarding API stability:*
>>
>> All newly added @Public APIs are @Unstable.
>>
>> The documentation jira [3] provides the detailed configuration steps. The
>> feature works end-to-end, and we tested it in our local cluster. The branch
>> code is run against trunk and tracked via [4].
>>
>> We would love to get your thoughts before opening a voting thread.
>>
>> Special thanks to the team of folks who worked hard and contributed towards
>> this effort, including design discussions, patches, and reviews: Weiwei
>> Yang, Bibin Chundatt, Wangda Tan, Vinod Kumar Vavilappali, Konstantinos
>> Karanasos, Arun Suresh, Varun Saxena, Devaraj Kavali, Lei Guo, Chong Chen.
>>
>> [1] :
>>
>> https://issues.apache.org/jira/secure/attachment/12937633/Node-Attributes-Requirements-Design-doc_v2.pdf
>> [2] : https://issues.apache.org/jira/browse/YARN-3409
>> [3] : https://issues.apache.org/jira/browse/YARN-7865
>> [4] : https://issues.apache.org/jira/browse/YARN-8718
>>
>> Thanks,
>> + Naga & Sunil Govindan
>>
>


[jira] [Reopened] (HDFS-9059) Expose lssnapshottabledir via WebHDFS

2018-09-05 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDFS-9059:
-
  Assignee: (was: Jagadesh Kiran N)

> Expose lssnapshottabledir via WebHDFS
> -
>
> Key: HDFS-9059
> URL: https://issues.apache.org/jira/browse/HDFS-9059
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>  Labels: newbie
>
> lssnapshottabledir should be exposed via WebHDFS.






[jira] [Created] (HDDS-404) Implement toString() in OmKeyLocationInfo

2018-09-05 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-404:
--

 Summary: Implement toString() in OmKeyLocationInfo
 Key: HDDS-404
 URL: https://issues.apache.org/jira/browse/HDDS-404
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia
 Fix For: 0.2.1


OmKeyLocationInfo does not have an overridden toString().

As such, the information is not captured appropriately in Audit Logging.

Example:

{{2018-09-06 01:57:22,950 | INFO  | OMAudit | user=hadoop | ip=172.18.0.4 | 
op=COMMIT_KEY | \{volume=vol-0-16241, bucket=bucket-0-61479, key=key-257-04655, 
dataSize=10240, replicationType=null, replicationFactor=null, 
keyLocationInfo=[org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo@5fb8c959],
 clientID=61945301411} | ret=SUCCESS |}}

 

This Jira proposes implementing the required toString() so that the audit log 
contains the appropriate data.
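A minimal sketch of such an override, using the field names visible in the infoKey output above (the real class lives in org.apache.hadoop.ozone.om.helpers; the exact fields and format are for the patch to decide):

```java
public class OmKeyLocationInfoSketch {
    private final long containerID;
    private final long localID;
    private final long length;
    private final long offset;

    OmKeyLocationInfoSketch(long containerID, long localID, long length, long offset) {
        this.containerID = containerID;
        this.localID = localID;
        this.length = length;
        this.offset = offset;
    }

    // Overriding toString() makes the audit log show field values instead of
    // the default "ClassName@hashcode" form seen in the log line above.
    @Override
    public String toString() {
        return "{containerID=" + containerID
            + ", localID=" + localID
            + ", length=" + length
            + ", offset=" + offset + '}';
    }

    public static void main(String[] args) {
        System.out.println(new OmKeyLocationInfoSketch(16, 1536143112267L, 4659, 0));
        // prints {containerID=16, localID=1536143112267, length=4659, offset=0}
    }
}
```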






[jira] [Created] (HDFS-13897) DiskBalancer: for invalid configurations, print a WARN message in the console output while executing the DiskBalancer commands.

2018-09-05 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13897:


 Summary: DiskBalancer: for invalid configurations, print a WARN 
message in the console output while executing the DiskBalancer commands.
 Key: HDFS-13897
 URL: https://issues.apache.org/jira/browse/HDFS-13897
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: diskbalancer
Reporter: Harshakiran Reddy


{{Scenario:-}}

1. Configure an invalid value for any DiskBalancer configuration and restart 
the Datanode.
2. Run the DiskBalancer commands.

{{Actual output:-}}

The command silently continues with the default configurations.

{{Expected output:-}}

A WARN message should be printed to the console, such as *configured an invalid 
value; taking the default value for this configuration*. That way the user knows 
their configuration did not take effect for the current DiskBalancer run; 
otherwise they will assume it is using their configured values.
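A sketch of the requested behavior, using a hypothetical helper (the key name is illustrative; the actual DiskBalancer reads its settings through Hadoop's Configuration class):

```java
public class ConfGuard {
    // Parse a configured value; on an invalid value, record a WARN-style
    // message and fall back to the default instead of falling back silently.
    static long getLongWithWarn(String key, String rawValue, long defaultValue,
                                StringBuilder log) {
        try {
            return Long.parseLong(rawValue);
        } catch (NumberFormatException e) {
            log.append("WARN: invalid value '").append(rawValue)
               .append("' for ").append(key)
               .append("; taking the default value ").append(defaultValue);
            return defaultValue;
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        // Illustrative key name, not necessarily the exact DiskBalancer key.
        long bandwidth = getLongWithWarn(
            "dfs.disk.balancer.max.disk.throughputInMBperSec",
            "not-a-number", 10L, log);
        System.out.println(bandwidth); // 10
        System.out.println(log);       // the WARN line the report asks for
    }
}
```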






[jira] [Created] (HDFS-13898) Throw retriable exception for getBlockLocations when ObserverNameNode is in safemode

2018-09-05 Thread Chao Sun (JIRA)
Chao Sun created HDFS-13898:
---

 Summary: Throw retriable exception for getBlockLocations when 
ObserverNameNode is in safemode
 Key: HDFS-13898
 URL: https://issues.apache.org/jira/browse/HDFS-13898
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chao Sun


When ObserverNameNode is in safe mode, {{getBlockLocations}} may throw a safe 
mode exception if the given file doesn't have any blocks yet.

{code}
try {
  checkOperation(OperationCategory.READ);
  res = FSDirStatAndListingOp.getBlockLocations(
      dir, pc, srcArg, offset, length, true);
  if (isInSafeMode()) {
    for (LocatedBlock b : res.blocks.getLocatedBlocks()) {
      // if safemode & no block locations yet then throw safemodeException
      if ((b.getLocations() == null) || (b.getLocations().length == 0)) {
        SafeModeException se = newSafemodeException(
            "Zero blocklocations for " + srcArg);
        if (haEnabled && haContext != null &&
            haContext.getState().getServiceState() ==
                HAServiceState.ACTIVE) {
          throw new RetriableException(se);
        } else {
          throw se;
        }
      }
    }
  }
{code}

It only throws {{RetriableException}} for the active NN, so requests served by 
the observer may simply fail instead of being retried.
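A sketch of the proposed direction: broaden the condition so that an observer also wraps the exception in {{RetriableException}}. The enum below stands in for HAServiceState; this is illustrative, not the actual FSNamesystem code:

```java
public class SafeModeRetrySketch {
    // Stand-in for org.apache.hadoop.ha.HAServiceProtocol.HAServiceState.
    enum ServiceState { ACTIVE, STANDBY, OBSERVER }

    // Wrap the SafeModeException as retriable for ACTIVE and OBSERVER, so
    // clients retry against the observer instead of failing outright.
    static boolean shouldWrapAsRetriable(boolean haEnabled, ServiceState state) {
        return haEnabled
            && (state == ServiceState.ACTIVE || state == ServiceState.OBSERVER);
    }

    public static void main(String[] args) {
        System.out.println(shouldWrapAsRetriable(true, ServiceState.OBSERVER)); // true
        System.out.println(shouldWrapAsRetriable(true, ServiceState.STANDBY));  // false
    }
}
```

In the snippet above this would correspond to extending the `haContext.getState().getServiceState() == HAServiceState.ACTIVE` check with an OBSERVER case.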


