[jira] [Created] (HDDS-995) Use final hadoop 3.2.0 dependency in Ozone

2019-01-23 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-995:
-

 Summary: Use final hadoop 3.2.0 dependency in Ozone
 Key: HDDS-995
 URL: https://issues.apache.org/jira/browse/HDDS-995
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton
Assignee: Supratim Deka


As of now Ozone uses the (off-tree) 3.2.1-SNAPSHOT hadoop dependency, mainly 
because there was no recent Hadoop release that included all the improvements 
required by Ozone.

Hadoop 3.2.0 has finally been uploaded to the Apache Nexus repository, so we 
can switch to the latest release artifacts instead of snapshots.

To use 3.2.0, hadoop-ozone/pom.xml and hadoop-hdds/pom.xml should be updated to 
use version 3.2.0 of the parent.

The build can be tested with

./hadoop-ozone/dev-support/checks/build.sh

./hadoop-ozone/dev-support/checks/acceptance.sh
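
The parent version bump described above might look like the following sketch (the exact parent groupId/artifactId in hadoop-ozone/pom.xml and hadoop-hdds/pom.xml may differ; this is only an illustration of the change, not the actual pom contents):

```xml
<!-- Hypothetical sketch of the parent version bump; verify the real
     parent coordinates in hadoop-ozone/pom.xml and hadoop-hdds/pom.xml. -->
<parent>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-main</artifactId>
  <version>3.2.0</version> <!-- previously 3.2.1-SNAPSHOT -->
</parent>
```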



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-995) Use final hadoop 3.2.0 dependency in Ozone

2019-01-23 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton resolved HDDS-995.
---
Resolution: Duplicate

Duplicate of HDDS-993

> Use final hadoop 3.2.0 dependency in Ozone
> --
>
> Key: HDDS-995
> URL: https://issues.apache.org/jira/browse/HDDS-995
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Supratim Deka
>Priority: Major
>
> As of now Ozone uses the (off-tree) 3.2.1-SNAPSHOT hadoop dependency, mainly 
> because there was no recent Hadoop release that included all the improvements 
> required by Ozone.
> Hadoop 3.2.0 has finally been uploaded to the Apache Nexus repository, so we 
> can switch to the latest release artifacts instead of snapshots.
> To use 3.2.0, hadoop-ozone/pom.xml and hadoop-hdds/pom.xml should be updated 
> to use version 3.2.0 of the parent.
> The build can be tested with
> ./hadoop-ozone/dev-support/checks/build.sh
> ./hadoop-ozone/dev-support/checks/acceptance.sh






[jira] [Created] (HDDS-996) Incorrect data length gets updated in OM by client in case it hits exception in multiple successive block writes

2019-01-23 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-996:


 Summary: Incorrect data length gets updated in OM by client in 
case it hits exception in multiple successive block writes
 Key: HDDS-996
 URL: https://issues.apache.org/jira/browse/HDDS-996
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.4.0


In the retry path, the data that needs to be written to the next block should 
always be calculated from the data actually residing in the buffer list, rather 
than from the length of the currently allocated stream entry. Otherwise an 
incorrect key length is updated in OM when multiple exceptions occur during 
key writes.
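
A minimal sketch of the idea (the class and method names here are illustrative assumptions, not the actual Ozone client code): on retry, the rewrite length is derived from the bytes actually held in the buffer list, never from the allocated stream entry length.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: compute the retry length from buffered data.
public class RetryLengthSketch {

  /** Total bytes still residing in the buffer list — the true rewrite length. */
  static long bufferedDataLength(List<ByteBuffer> bufferList) {
    long total = 0;
    for (ByteBuffer buf : bufferList) {
      total += buf.remaining();
    }
    return total;
  }

  public static void main(String[] args) {
    List<ByteBuffer> buffers = Arrays.asList(
        ByteBuffer.wrap(new byte[128]), ByteBuffer.wrap(new byte[64]));

    long allocatedStreamEntryLength = 256; // allocated, not necessarily written
    long toRewrite = bufferedDataLength(buffers);

    // Reporting the buffered length (192) instead of the allocated length (256)
    // keeps the key length recorded in OM correct across repeated exceptions.
    System.out.println(toRewrite); // 192
    System.out.println(allocatedStreamEntryLength);
  }
}
```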






[jira] [Created] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-23 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-997:
---

 Summary: Add blockade Tests for scm isolation and mixed node 
isolation
 Key: HDDS-997
 URL: https://issues.apache.org/jira/browse/HDDS-997
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nilotpal Nandi









Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/

[Jan 22, 2019 2:23:32 AM] (wwei) YARN-9210. RM nodes web page can not display 
node info. Contributed by
[Jan 22, 2019 4:01:08 AM] (wwei) HDFS-14207. ZKFC should catch exception when 
ha configuration missing.
[Jan 22, 2019 4:44:37 AM] (arp) HDFS-14221. Replace Guava Optional with Java 
Optional. Contributed by
[Jan 22, 2019 4:45:09 AM] (arp) HDFS-14222. Make ThrottledAsyncChecker 
constructor public. Contributed
[Jan 22, 2019 4:40:42 PM] (stevel) HADOOP-16048. ABFS: Fix Date format parser.
[Jan 22, 2019 5:27:17 PM] (bharat) HDDS-913. Ozonefs defaultFs example is wrong 
in the documentation.
[Jan 22, 2019 9:37:05 PM] (bharat) HDDS-992. ozone-default.xml has invalid text 
from a stale merge.
[Jan 22, 2019 11:24:43 PM] (eyang) YARN-9146.  Added REST API to configure 
auxiliary service.
[Jan 22, 2019 11:59:36 PM] (eyang) HADOOP-15922. Fixed 
DelegationTokenAuthenticator URL decoding for doAs
[Jan 23, 2019 1:03:06 AM] (tasanuma) HDFS-14218. EC: Ls -e throw NPE when 
directory ec policy is disabled.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestReadWriteDiskValidator 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1025/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/j

[jira] [Created] (HDDS-998) Remove XceiverClientSpi Interface in Ozone

2019-01-23 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-998:


 Summary: Remove XceiverClientSpi Interface in Ozone
 Key: HDDS-998
 URL: https://issues.apache.org/jira/browse/HDDS-998
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.4.0


Currently, the Ratis client and the standalone client both implement the 
XceiverClientSpi interface. XceiverClientRatis supports and requires 
functionality to handle and update the committed log info of the Ratis servers, 
and the same needs to be propagated and handled in the Ozone client. For the 
standalone client there is no notion of a Raft log index, so this functionality 
makes no sense. As the standalone client and the Ratis client have diverged 
functionally, it no longer makes sense to keep a common interface for the two.
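
To illustrate the divergence (interface and method names below are assumptions for the sketch, not the real Ozone types): once the shared interface is dropped, only the Ratis client exposes Raft-log commit tracking, while the standalone client has no such notion.

```java
// Hypothetical sketch of the split: two independent client interfaces
// instead of one common XceiverClientSpi.
public class ClientSplitSketch {

  interface StandaloneClient {
    String send(String request);
  }

  interface RatisClient {
    String send(String request);
    long getReplicatedMinCommitIndex(); // Raft-specific; absent on standalone
  }

  static StandaloneClient newStandaloneStub() {
    return request -> "ok:" + request;
  }

  static RatisClient newRatisStub() {
    return new RatisClient() {
      public String send(String request) { return "ok:" + request; }
      public long getReplicatedMinCommitIndex() { return 42L; }
    };
  }

  public static void main(String[] args) {
    System.out.println(newStandaloneStub().send("put"));              // ok:put
    System.out.println(newRatisStub().getReplicatedMinCommitIndex()); // 42
  }
}
```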






[jira] [Created] (HDDS-999) Make the DNS resolution in OzoneManager more resilient

2019-01-23 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-999:
-

 Summary: Make the DNS resolution in OzoneManager more resilient
 Key: HDDS-999
 URL: https://issues.apache.org/jira/browse/HDDS-999
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Elek, Marton


If the OzoneManager is started before SCM, the SCM DNS name may not yet be 
resolvable. In this case the OM should retry and re-resolve the DNS name, but 
as of now it throws an exception:
{code:java}
2019-01-23 17:14:25 ERROR OzoneManager:593 - Failed to start the OzoneManager.
java.net.SocketException: Call From om-0.om to null:0 failed on socket 
exception: java.net.SocketException: Unresolved address; For more details see:  
http://wiki.apache.org/hadoop/SocketException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:798)
    at org.apache.hadoop.ipc.Server.bind(Server.java:566)
    at org.apache.hadoop.ipc.Server$Listener.(Server.java:1042)
    at org.apache.hadoop.ipc.Server.(Server.java:2815)
    at org.apache.hadoop.ipc.RPC$Server.(RPC.java:994)
    at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:421)
    at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
    at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:804)
    at 
org.apache.hadoop.ozone.om.OzoneManager.startRpcServer(OzoneManager.java:563)
    at 
org.apache.hadoop.ozone.om.OzoneManager.getRpcServer(OzoneManager.java:927)
    at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:265)
    at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:674)
    at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:587)
Caused by: java.net.SocketException: Unresolved address
    at sun.nio.ch.Net.translateToSocketException(Net.java:131)
    at sun.nio.ch.Net.translateException(Net.java:157)
    at sun.nio.ch.Net.translateException(Net.java:163)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:76)
    at org.apache.hadoop.ipc.Server.bind(Server.java:549)
    ... 11 more
Caused by: java.nio.channels.UnresolvedAddressException
    at sun.nio.ch.Net.checkAddress(Net.java:101)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:218)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    ... 12 more{code}
It should be fixed. (See also HDDS-421, which fixed the same problem on the 
datanode side, and HDDS-907, which provides a workaround until this issue is 
resolved.)
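
The retry loop could be sketched roughly as follows (attempt count, sleep interval, and method names are assumptions, not the actual OzoneManager code): keep re-resolving the SCM address until DNS can resolve it instead of failing on the first unresolved lookup.

```java
import java.net.InetSocketAddress;

// Hypothetical sketch of DNS resolution with retries.
public class ResolveWithRetry {

  static InetSocketAddress resolveWithRetry(String host, int port,
      int maxAttempts, long sleepMillis) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      // A fresh InetSocketAddress triggers a new DNS lookup each time.
      InetSocketAddress addr = new InetSocketAddress(host, port);
      if (!addr.isUnresolved()) {
        return addr;
      }
      try {
        Thread.sleep(sleepMillis);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
      }
    }
    throw new IllegalStateException("Could not resolve " + host + ":" + port
        + " after " + maxAttempts + " attempts");
  }

  public static void main(String[] args) {
    // localhost always resolves, so this returns on the first attempt.
    InetSocketAddress addr = resolveWithRetry("localhost", 9862, 10, 100);
    System.out.println(addr.isUnresolved()); // false
  }
}
```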






[jira] [Created] (HDDS-1000) Write a tool to dump DataNode RocksDB in human-readable format

2019-01-23 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1000:
---

 Summary: Write a tool to dump DataNode RocksDB in human-readable 
format
 Key: HDDS-1000
 URL: https://issues.apache.org/jira/browse/HDDS-1000
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


It would be good to have a command-line tool that can dump the contents of a 
DataNode RocksDB file in a human-readable format, e.g. YAML.







[jira] [Created] (HDDS-1001) Switch Hadoop version to 3.2.0

2019-01-23 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1001:
---

 Summary: Switch Hadoop version to 3.2.0
 Key: HDDS-1001
 URL: https://issues.apache.org/jira/browse/HDDS-1001
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


Now that Hadoop 3.2.0 is released, we should be able to switch the 
hadoop.version from 3.2.1-SNAPSHOT to 3.2.0.

We should run all unit/acceptance tests manually once after making the change. 
Not sure Jenkins will do that.






[jira] [Resolved] (HDDS-1001) Switch Hadoop version to 3.2.0

2019-01-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-1001.
-
Resolution: Duplicate

Whoops, [~sdeka] just pointed out this dups HDDS-993.

> Switch Hadoop version to 3.2.0
> --
>
> Key: HDDS-1001
> URL: https://issues.apache.org/jira/browse/HDDS-1001
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Supratim Deka
>Priority: Major
>
> Now that Hadoop 3.2.0 is released, we should be able to switch the 
> hadoop.version from 3.2.1-SNAPSHOT to 3.2.0.
> We should run all unit/acceptance tests manually once after making the 
> change. Not sure Jenkins will do that.






[jira] [Created] (HDFS-14225) RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace

2019-01-23 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-14225:
-

 Summary: RBF : MiniRouterDFSCluster should configure the failover 
proxy provider for namespace
 Key: HDFS-14225
 URL: https://issues.apache.org/jira/browse/HDFS-14225
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: federation
Affects Versions: 3.1.1
Reporter: Surendra Singh Lilhore


A unit test fails with an UnknownHostException:

{noformat}
org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
java.net.UnknownHostException: ns0
{noformat}
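
One plausible direction, shown here purely as a hedged illustration (the config key pattern assumes nameservice ns0; the provider class MiniRouterDFSCluster actually needs may differ), is to configure a client failover proxy provider for each router-exposed namespace:

```xml
<!-- Hypothetical illustration only; verify the provider class and
     nameservice id against the actual MiniRouterDFSCluster setup. -->
<property>
  <name>dfs.client.failover.proxy.provider.ns0</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```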






[jira] [Created] (HDFS-14226) RBF: setErasureCodingPolicy should set all multiple subclusters' directories.

2019-01-23 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-14226:
---

 Summary: RBF: setErasureCodingPolicy should set all multiple 
subclusters' directories.
 Key: HDFS-14226
 URL: https://issues.apache.org/jira/browse/HDFS-14226
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma


Currently, the policy is set on only one of the subclusters.

{noformat}
// create a mount point of multiple subclusters
hdfs dfsrouteradmin -add /all_data ns1 /data1
hdfs dfsrouteradmin -add /all_data ns2 /data2

hdfs ec -Dfs.defaultFS=hdfs://router: -setPolicy -path /all_data -policy 
RS-3-2-1024k
Set RS-3-2-1024k erasure coding policy on /all_data

hdfs ec -Dfs.defaultFS=hdfs://router: -getPolicy -path /all_data
RS-3-2-1024k

hdfs ec -Dfs.defaultFS=hdfs://ns1-namenode:8020 -getPolicy -path /data1
RS-3-2-1024k

hdfs ec -Dfs.defaultFS=hdfs://ns2-namenode:8020 -getPolicy -path /data2
The erasure coding policy of /data2 is unspecified
{noformat}


