[jira] [Created] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-01-10 Thread Wang, Xinglong (JIRA)
Wang, Xinglong created HDFS-14195:
-

 Summary: OIV: print out storage policy id in oiv Delimited output
 Key: HDFS-14195
 URL: https://issues.apache.org/jira/browse/HDFS-14195
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Reporter: Wang, Xinglong


There is currently no command-line way to list all the folders and files 
that carry a given storage policy, such as ALL_SSD.

Adding the storage policy id to the oiv Delimited output will let oiv 
post-analysis build an overview of all folders/files under a given storage 
policy and apply internal regulation based on that information.

For comparison, PBImageXmlWriter.java already prints xattrs, which include 
the storage policy, since HDFS-9835.
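
To make the post-analysis concrete, here is a minimal sketch, assuming the 
Delimited output gains a trailing storagePolicyId column; the column 
position, class name, and the ALL_SSD id (12 in HdfsConstants) are 
illustrative assumptions, not part of any committed patch:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical post-analysis over oiv Delimited output: print every path
// whose (assumed trailing) storagePolicyId column matches ALL_SSD.
public class StoragePolicyFilter {
  public static void main(String[] args) throws IOException {
    final int allSsdId = 12; // ALLSSD_STORAGE_POLICY_ID in HdfsConstants
    Files.lines(Paths.get(args[0]))
        .skip(1)                            // header row
        .map(line -> line.split("\t"))
        .filter(c -> Integer.parseInt(c[c.length - 1].trim()) == allSsdId)
        .forEach(c -> System.out.println(c[0])); // path is the first column
  }
}
{code}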



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14196) ArrayIndexOutOfBoundsException in JN metrics makes JN out of sync

2019-01-10 Thread Ranith Sardar (JIRA)
Ranith Sardar created HDFS-14196:


 Summary: ArrayIndexOutOfBoundsException in JN metrics makes JN out 
of sync
 Key: HDFS-14196
 URL: https://issues.apache.org/jira/browse/HDFS-14196
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ranith Sardar
Assignee: Ranith Sardar


{noformat}
2018-11-26 21:55:39,100 | WARN | IPC Server handler 4 on 25012 | IPC Server handler 4 on 25012, call org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.journal from 192.100.2.4:41622 Call#785140293 Retry#0 | Server.java:2334
java.lang.ArrayIndexOutOfBoundsException: 500
    at org.apache.hadoop.metrics2.util.SampleQuantiles.insert(SampleQuantiles.java:114)
    at org.apache.hadoop.metrics2.lib.MutableQuantiles.add(MutableQuantiles.java:130)
    at org.apache.hadoop.hdfs.qjournal.server.JournalMetrics.addSync(JournalMetrics.java:120)
    at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:400)
    at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:153)
    at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
    at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:2542)
{noformat}
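
For context, the sketch below shows one way a bounded sample buffer of size 
500 can overflow under concurrent inserts; this is an assumed failure mode 
for illustration, not code lifted from SampleQuantiles:

{code:java}
// Assumed failure mode (illustrative): if insert() is reachable by two
// threads at once, both can pass the capacity check before either flushes,
// and the later write lands at index 500 == buffer.length, producing
// ArrayIndexOutOfBoundsException: 500 as in the log above.
public class RacySampleBuffer {
  private final long[] buffer = new long[500];
  private int count = 0;

  public void insert(long v) {
    buffer[count] = v; // AIOOBE here once count reaches buffer.length
    count++;
    if (count == buffer.length) {
      flush();
    }
  }

  private void flush() {
    // merge buffered samples into the running summary, then reset
    count = 0;
  }
}
{code}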






[jira] [Created] (HDDS-971) ContainerDataConstructor throws exception on QUASI_CLOSED and UNHEALTHY container state

2019-01-10 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-971:


 Summary: ContainerDataConstructor throws exception on QUASI_CLOSED 
and UNHEALTHY container state
 Key: HDDS-971
 URL: https://issues.apache.org/jira/browse/HDDS-971
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain


We need to handle the QUASI_CLOSED and UNHEALTHY states in the ContainerData 
constructor logic. Currently the code uses a switch-case to determine the 
state; that logic can be replaced with a call to
{code:java}
ContainerProtos.ContainerDataProto.State.valueOf(state)
{code}
This Jira also fixes the test failure TestKeys#testPutAndGetKeyWithDnRestart.
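
A hedged before/after sketch of the constructor change (the field name and 
the ordinals in the switch are illustrative, not the actual ContainerData 
source):

{code:java}
// Before (illustrative): a hand-written switch that predates the newer
// states, so QUASI_CLOSED and UNHEALTHY fall through to the default branch
// and throw.
switch (state) {
  case 1: this.state = ContainerProtos.ContainerDataProto.State.OPEN; break;
  case 2: this.state = ContainerProtos.ContainerDataProto.State.CLOSING; break;
  case 3: this.state = ContainerProtos.ContainerDataProto.State.CLOSED; break;
  default: throw new IllegalArgumentException("Unknown state: " + state);
}

// After: delegate to the generated protobuf enum, which resolves every
// declared state, including QUASI_CLOSED and UNHEALTHY.
this.state = ContainerProtos.ContainerDataProto.State.valueOf(state);
{code}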






[jira] [Created] (HDFS-14197) Invalid exit codes for invalid fsck report

2019-01-10 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14197:


 Summary: Invalid exit codes for invalid fsck report 
 Key: HDFS-14197
 URL: https://issues.apache.org/jira/browse/HDFS-14197
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Harshakiran Reddy



{noformat}
bin> ./hdfs fsck /harsha/file1 -blocks -files -locations
FileSystem is inaccessible due to:
java.io.FileNotFoundException: File does not exist: /harsha/file1
DFSck exiting.
bin> echo $?
0
bin>
{noformat}

Expected result: the exit code should be 1.
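
A minimal sketch of the expected behaviour, assuming the usual 
Tool/ToolRunner wiring (the class name and error handling are illustrative, 
not the committed fix): whatever run() returns becomes the shell exit code, 
so an inaccessible FileSystem should yield a non-zero status.

{code:java}
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class FsckExitCodeSketch extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    try {
      FileSystem fs = FileSystem.get(getConf());
      fs.getFileStatus(new Path(args[0])); // throws if the path is missing
    } catch (java.io.FileNotFoundException fnfe) {
      System.err.println("FileSystem is inaccessible due to:\n" + fnfe);
      return 1; // propagated to the shell, so `echo $?` prints 1
    }
    return 0;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner hands run()'s return value straight to System.exit.
    System.exit(ToolRunner.run(new FsckExitCodeSketch(), args));
  }
}
{code}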







[jira] [Created] (HDFS-14198) Upload and Create button doesn't get enabled after getting reset.

2019-01-10 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-14198:
---

 Summary: Upload and Create button doesn't get enabled after 
getting reset.
 Key: HDFS-14198
 URL: https://issues.apache.org/jira/browse/HDFS-14198
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


In explorer.html, the Upload and Create buttons are reset after an operation 
but are not re-enabled.






[jira] [Resolved] (HDFS-14159) Backporting HDFS-12882 to branch-3.0: Support full open(PathHandle) contract in HDFS

2019-01-10 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal resolved HDFS-14159.
---
Resolution: Won't Fix

> Backporting HDFS-12882 to branch-3.0: Support full open(PathHandle) contract 
> in HDFS
> 
>
> Key: HDFS-14159
> URL: https://issues.apache.org/jira/browse/HDFS-14159
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.3
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: HDFS-12882.branch-3.0.000.patch
>
>
> This task aims to backport HDFS-12882 and some connecting commits to 
> branch-3.0 without introducing API incompatibilities.
> In order to be able to cleanly backport, first HDFS-7878, then HDFS-12877 
> should be backported to that branch as well (both can be executed cleanly, 
> and with build success).
> Also, this patch would introduce backward-incompatible API changes in 
> hadoop-hdfs-client, so we should turn it into a compatible change (similar 
> to how HDFS-13830 dealt with this problem).
>  






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/

[Jan 9, 2019 4:55:25 AM] (tasanuma) HADOOP-15941. [JDK 11] Compilation failure: 
package com.sun.jndi.ldap is
[Jan 9, 2019 5:57:58 PM] (sean) HADOOP-16027. [DOC] Effective use of FS 
instances during S3A integration
[Jan 9, 2019 6:57:24 PM] (hanishakoneru) HDDS-947. Implement OzoneManager State 
Machine.
[Jan 9, 2019 7:20:57 PM] (hanishakoneru) Revert "HDDS-947. Implement 
OzoneManager State Machine."
[Jan 9, 2019 11:24:58 PM] (jlowe) Revert "HDFS-14084. Need for more stats in 
DFSClient. Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestReadWriteDiskValidator 
   hadoop.security.ssl.TestSSLFactory 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap 
   hadoop.yarn.client.api.impl.TestAMRMClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [184K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [332K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1012/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_had

Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-10 Thread Weiwei Yang
Hi Sunil

I tried to verify the site docs, but it seems
"hadoop-site/hadoop-project/index.html" still has the old content. Should
that be updated before creating the RCs?
Another minor thing: it looks like "python-dateutil" now needs to be
installed before building the docs, so we may need to update BUILD.txt
accordingly.

Thanks
--
Weiwei

On Thu, Jan 10, 2019 at 1:03 PM Wilfred Spiegelenburg
 wrote:

> +1 (non binding)
>
> - build from source on MacOSX 10.14.2, 1.8.0u181
> - successful native build on Ubuntu 16.04.3
> - confirmed the checksum and signature
> - deployed a single node cluster  (openjdk 1.8u191 / centos 7.5)
> - uploaded the MR framework
> - configured YARN with the FS
> - ran multiple MR jobs
>
> > On 8 Jan 2019, at 22:42, Sunil G  wrote:
> >
> > Hi folks,
> >
> >
> > Thanks to all of you who helped in this release [1] and for helping to
> vote
> > for RC0. I have created second release candidate (RC1) for Apache Hadoop
> > 3.2.0.
> >
> >
> > Artifacts for this RC are available here:
> >
> > http://home.apache.org/~sunilg/hadoop-3.2.0-RC1/
> >
> >
> > RC tag in git is release-3.2.0-RC1.
> >
> >
> >
> > The maven artifacts are available via repository.apache.org at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1178/
> >
> >
> > This vote will run 7 days (5 weekdays), ending on 14th Jan at 11:59 pm
> PST.
> >
> >
> >
> > 3.2.0 contains 1092 [2] fixed JIRA issues since 3.1.0. Below feature
> > additions
> >
> > are the highlights of this release.
> >
> > 1. Node Attributes Support in YARN
> >
> > 2. Hadoop Submarine project for running Deep Learning workloads on YARN
> >
> > 3. Support service upgrade via YARN Service API and CLI
> >
> > 4. HDFS Storage Policy Satisfier
> >
> > 5. Support Windows Azure Storage - Blob file system in Hadoop
> >
> > 6. Phase 3 improvements for S3Guard and Phase 5 improvements S3a
> >
> > 7. Improvements in Router-based HDFS federation
> >
> >
> >
> > Thanks to Wangda, Vinod, Marton for helping me in preparing the release.
> >
> > I have done some testing with my pseudo cluster. My +1 to start.
> >
> >
> >
> > Regards,
> >
> > Sunil
> >
> >
> >
> > [1]
> >
> >
> https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
> >
> > [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
> > AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
> > ORDER BY fixVersion ASC
>
>
> Wilfred Spiegelenburg | Software Engineer
> cloudera.com 

-- 
Weiwei Yang


Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-10 Thread Craig . Condit
+1 (non-binding)

- built from source on CentOS 7.5
- deployed single node cluster
- ran several yarn jobs
- ran multiple docker jobs, including spark-on-docker





[jira] [Created] (HDDS-972) Add support for configuring multiple OMs

2019-01-10 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-972:
---

 Summary: Add support for configuring multiple OMs
 Key: HDDS-972
 URL: https://issues.apache.org/jira/browse/HDDS-972
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru


For OM HA, we need to run multiple (at least 3) OM services so that we can 
form a replicated Ratis ring of OMs. This Jira aims to add support for 
configuring multiple OMs.






[jira] [Created] (HDDS-973) HDDS/Ozone fail to build on Windows

2019-01-10 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-973:
---

 Summary: HDDS/Ozone fail to build on Windows
 Key: HDDS-973
 URL: https://issues.apache.org/jira/browse/HDDS-973
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Sammi Chen
Assignee: Xiaoyu Yao


Thanks [~Sammi] for reporting the issue with building hdds/ozone on Windows. 
I can repro it locally and will post a fix shortly.






[jira] [Created] (HDDS-974) Add getServiceAddress method to ServiceInfo and use it in TestOzoneShell

2019-01-10 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-974:
--

 Summary: Add getServiceAddress method to ServiceInfo and use it in 
TestOzoneShell
 Key: HDDS-974
 URL: https://issues.apache.org/jira/browse/HDDS-974
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Dinesh Chitlangia


This jira has been filed based on [~ajayydv]'s [review 
comment|https://issues.apache.org/jira/browse/HDDS-960?focusedCommentId=16739807&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel] 
on HDDS-960:

1. Add a method getServiceAddress(ServicePort port) to ServiceInfo.
2. Use this method in TestOzoneShell in place of the following snippet (a 
sketch of the proposed helper follows the snippet):

{code:java}
String omHostName = services.stream().filter(
a -> a.getNodeType().equals(HddsProtos.NodeType.OM))
.collect(Collectors.toList()).get(0).getHostname();
{code}
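
A hedged sketch of the proposed helper; getHostname() appears in the snippet 
above, while the ServicePort accessor is an assumption about the generated 
protobuf message:

{code:java}
// Sketch of the proposed ServiceInfo addition (port.getValue() is assumed):
public String getServiceAddress(HddsProtos.ServicePort port) {
  return getHostname() + ":" + port.getValue();
}
{code}

TestOzoneShell could then resolve the OM address with a single call on the 
matching ServiceInfo instead of the stream pipeline above.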







[jira] [Resolved] (HDDS-540) Unblock certain SCM client APIs from SCM#checkAdminAccess

2019-01-10 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-540.
-
Resolution: Not A Problem

> Unblock certain SCM client APIs from SCM#checkAdminAccess
> -
>
> Key: HDDS-540
> URL: https://issues.apache.org/jira/browse/HDDS-540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Currently most of the SCM client APIs are guarded by checkAdminAccess. 
> This ticket is opened to unblock non-admin clients from accessing SCM 
> container/pipeline information during block allocation. 
>  
> {code}
> scm_1           | 2018-09-22 02:52:32 INFO  Server:2726 - IPC Server handler 
> 5 on 9860, call Call#4 Retry#0 
> org.apache.hadoop.ozone.protocol.StorageContainerLocationProtocol.getContainerWithPipeline
>  from 192.168.0.2:34101
> scm_1           | java.io.IOException: Access denied for user 
> testuser/datan...@example.com. Superuser privilege is required.
> scm_1           | at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.checkAdminAccess(StorageContainerManager.java:867)
> scm_1           | at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:190)
> scm_1           | at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:120)
> scm_1           | at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:10790)
> scm_1           | at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> scm_1           | at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> scm_1           | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> scm_1           | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> scm_1           | at java.security.AccessController.doPrivileged(Native 
> Method)
> scm_1           | at javax.security.auth.Subject.doAs(Subject.java:422)
> scm_1           | at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> scm_1           | at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}






Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-10 Thread Kuhu Shukla
+1 (non-binding)

- built from source on Mac
- deployed on a pseudo distributed one node cluster
- ran example jobs like sleep and wordcount.

Thank you for all the work on this release.
Regards,
Kuhu



Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-10 Thread Aaron Fabbri
Thanks Sunil and everyone who has worked on this release.

+1 from me.

- Verified checksums for tar file.
- Built from tar.gz.
- Ran through S3A and S3Guard integration tests (in AWS us-west 2).

This includes a yarn minicluster test but is mostly focused on s3a/s3guard.

Cheers,
Aaron




[jira] [Created] (HDDS-975) Create ozone cli commands for ozone shell

2019-01-10 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-975:
---

 Summary: Create ozone cli commands for ozone shell
 Key: HDDS-975
 URL: https://issues.apache.org/jira/browse/HDDS-975
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar
Assignee: Ajay Kumar


Create ozone cli commands for ozone shell






[jira] [Created] (HDFS-14199) make output of "dfs -getfattr -R -d " differentiate folder, file and symbol link

2019-01-10 Thread Zang Lin (JIRA)
Zang Lin created HDFS-14199:
---

 Summary: make output of "dfs  -getfattr -R -d " differentiate 
folder, file and symbol link
 Key: HDFS-14199
 URL: https://issues.apache.org/jira/browse/HDFS-14199
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: shell
Affects Versions: 3.2.0, 3.3.0
Reporter: Zang Lin


The current output of "hdfs dfs -getfattr -R -d" prints every entry with the 
"file:" prefix; it does not differentiate between directories, regular 
files, and symbolic links.






[jira] [Created] (HDDS-976) Support YAML format network topology cluster definition

2019-01-10 Thread Sammi Chen (JIRA)
Sammi Chen created HDDS-976:
---

 Summary: Support YAML format network topology cluster definition
 Key: HDDS-976
 URL: https://issues.apache.org/jira/browse/HDDS-976
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Sammi Chen





