Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/

[Jan 8, 2019 5:51:30 AM] (aajisaka) Revert "HADOOP-14556. S3A to support 
Delegation Tokens."
[Jan 8, 2019 6:04:06 AM] (bharat) HDDS-926. Use Timeout rule for the test 
methods in TestOzoneManager.
[Jan 8, 2019 6:30:53 AM] (wwei) YARN-9037. [CSI] Ignore volume resource in 
resource calculators based on
[Jan 8, 2019 11:57:57 AM] (stevel) HADOOP-16018. DistCp won't reassemble chunks 
when blocks per chunk > 0.
[Jan 8, 2019 1:54:38 PM] (elek) HDDS-968. Fix TestObjectPut failures. 
Contributed by Bharat Viswanadham.
[Jan 8, 2019 2:12:08 PM] (elek) HDDS-969. Fix TestOzoneManagerRatisServer test 
failure. Contributed by
[Jan 8, 2019 2:48:48 PM] (elek) HDDS-924. MultipartUpload: S3 APi for complete 
Multipart Upload.
[Jan 8, 2019 6:38:06 PM] (gifuma) HDFS-14189. Fix intermittent failure of 
TestNameNodeMetrics. Contributed
[Jan 8, 2019 9:27:19 PM] (elek) HDDS-965. Ozone: checkstyle improvements and 
code quality scripts.
[Jan 8, 2019 10:54:05 PM] (jlowe) YARN-6523. Optimize system credentials sent 
in node heartbeat responses.
[Jan 9, 2019 1:04:57 AM] (templedf) HDFS-14132. Add BlockLocation.isStriped() 
to determine if block is




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestReadWriteDiskValidator 
   hadoop.security.ssl.TestSSLFactory 
   hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.server.resourcemanager.security.TestAMRMTokens 
   hadoop.mapreduce.security.ssl.TestEncryptedShuffle 
   hadoop.mapred.TestReduceFetchFromPartialMem 
   hadoop.mapreduce.TestYarnClientProtocolProvider 
   hadoop.mapred.TestMRIntermediateDataEncryption 
   hadoop.mapreduce.TestMapReduceLazyOutput 
   hadoop.mapreduce.v2.TestMRJobs 
   hadoop.mapred.TestReduceFetch 
   hadoop.mapred.TestLazyOutput 
   hadoop.mapred.TestClusterMapReduceTestCase 
   hadoop.mapreduce.v2.TestMiniMRProxyUser 
   hadoop.mapreduce.TestMRJobClient 
   hadoop.mapred.TestMerge 
   hadoop.mapreduce.security.TestMRCredentials 
   hadoop.mapreduce.security.TestBinaryTokenFile 
   hadoop.mapred.TestJobName 
   hadoop.yarn.service.TestServiceAM 
   hadoop.streaming.TestFileArgs 
   hadoop.streaming.TestMultipleCachefiles 
   hadoop.streaming.TestSymLink 
   hadoop.streaming.TestMultipleArchiveFiles 
   hadoop.mapred.gridmix.TestDistCacheEmulation 
   hadoop.mapred.gridmix.TestGridmixSubmission 
   hadoop.mapred.gridmix.TestSleepJob 
   hadoop.mapred.gridmix.TestLoadJob 
   hadoop.tools.TestDistCh 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1011/

[jira] [Resolved] (HDDS-942) Fix TestOzoneConfigrationFields

2019-01-09 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar resolved HDDS-942.
-
Resolution: Duplicate

> Fix TestOzoneConfigrationFields 
> 
>
> Key: HDDS-942
> URL: https://issues.apache.org/jira/browse/HDDS-942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>
> A bunch of fields are missing from ozone-default.xml in the HDDS-4 branch. 
> We need to fix these before the merge.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-947) Implement OzoneManager State Machine

2019-01-09 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reopened HDDS-947:
-

The build was failing with the below error. I have reverted the patch and will 
post a new patch after fixing the compilation error.
{code}[ERROR] 
/Users/hkoneru/hadoop/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientWithRatis.java:[49,5]
 cluster has private access in 
org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract
{code}
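For context, the error above is the standard Java visibility rule: a private field in a base class is not accessible from a subclass, and declaring it protected (or exposing a getter) resolves it. A minimal stand-alone sketch, using simplified stand-in classes rather than the actual Ozone test code:

```java
// Simplified stand-ins for TestOzoneRpcClientAbstract and its subclass
// (hypothetical names; not the real Ozone classes).
class OzoneRpcClientAbstractBase {
    // Had this been 'private', a subclass reference would fail with
    // "cluster has private access", as in the build error above.
    protected static String cluster = "mini-cluster";
}

class OzoneRpcClientWithRatisLike extends OzoneRpcClientAbstractBase {
    static String describe() {
        // Compiles only because 'cluster' is protected, not private.
        return "using " + cluster;
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(OzoneRpcClientWithRatisLike.describe()); // prints "using mini-cluster"
    }
}
```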

> Implement OzoneManager State Machine
> 
>
> Key: HDDS-947
> URL: https://issues.apache.org/jira/browse/HDDS-947
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-947.000.patch, HDDS-947.001.patch, 
> HDDS-947.002.patch, HDDS-947.003.patch, HDDS-947.004.patch
>
>
> OM Ratis server would call OM State Machine to apply the committed 
> transactions. The State Machine processes the transaction and updates the 
> state of OzoneManager.






[jira] [Created] (HDFS-14194) Mention HDFS ACL incompatible changes more explicitly

2019-01-09 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-14194:
--

 Summary: Mention HDFS ACL incompatible changes more explicitly
 Key: HDFS-14194
 URL: https://issues.apache.org/jira/browse/HDFS-14194
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation, namenode
Affects Versions: 3.0.0-beta1
Reporter: Wei-Chiu Chuang


HDFS-11957 enabled POSIX ACL inheritance by default via 
dfs.namenode.posix.acl.inheritance.enabled.

Even though it was documented in the ACL doc, it is not explicit. Users 
upgrading to Hadoop 3.0 and beyond will be caught by surprise. The doc should 
be updated to make this clear, preferably with examples showing what to 
expect, so that search engines can find it.






[jira] [Created] (HDDS-970) Fix classnotfound error for bouncy castle classes in OM,SCM init

2019-01-09 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-970:
---

 Summary: Fix classnotfound error for bouncy castle classes in 
OM,SCM init
 Key: HDDS-970
 URL: https://issues.apache.org/jira/browse/HDDS-970
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar
Assignee: Ajay Kumar


Fix the ClassNotFoundException for Bouncy Castle classes during OM and SCM init.






[jira] [Reopened] (HDFS-14084) Need for more stats in DFSClient

2019-01-09 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HDFS-14084:
---

I reverted this from trunk, branch-3.2, branch-3.1, branch-3.1.2, and 
branch-3.0.  Heads up to [~leftnoteasy] as this will impact the 3.1.2 release 
process and require a new release candidate to be built if one was already 
created.


> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch, 
> HDFS-14084.003.patch, HDFS-14084.004.patch, HDFS-14084.005.patch, 
> HDFS-14084.006.patch, HDFS-14084.007.patch, HDFS-14084.008.patch, 
> HDFS-14084.009.patch, HDFS-14084.010.patch, HDFS-14084.011.patch
>
>
> The usage of HDFS has changed: once primarily a map-reduce filesystem, it is 
> now becoming more of a general-purpose filesystem. In most cases the issues 
> are with the Namenode, so we have metrics to know the workload or stress on 
> the Namenode.
> However, there is a need to collect more statistics for the different 
> operations/RPCs in DFSClient, to know which RPC operations are taking longer 
> and how frequent each operation is. These statistics can be exposed to users 
> of the DFS client, who can periodically log them or apply some form of flow 
> control when responses are slow. This will also help to isolate HDFS issues 
> in a mixed environment where, say, a node runs Spark, HBase and Impala 
> together. We can check the throughput of different operations across clients 
> and isolate problems caused by a noisy neighbor, network congestion or a 
> shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or 
> stats in DFSClient we would be better equipped to solve such complex 
> problems.
> List of jiras for reference:
> HADOOP-15538, HADOOP-15530 (client-side deadlock)
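A hypothetical sketch (not the HDDS/HDFS-14084 patch itself) of the kind of client-side per-RPC stats the description argues for — per-operation call counts plus cumulative latency, from which mean latency per RPC can be derived; all class and method names below are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Thread-safe per-operation call counts and cumulative latency.
class ClientOpStats {
    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();
    private final Map<String, LongAdder> totalNanos = new ConcurrentHashMap<>();

    // Record one completed RPC of the given operation name.
    void record(String op, long nanos) {
        counts.computeIfAbsent(op, k -> new LongAdder()).increment();
        totalNanos.computeIfAbsent(op, k -> new LongAdder()).add(nanos);
    }

    long count(String op) {
        LongAdder c = counts.get(op);
        return c == null ? 0 : c.sum();
    }

    // Mean latency in nanoseconds; 0 if the op was never recorded.
    long meanNanos(String op) {
        long c = count(op);
        LongAdder t = totalNanos.get(op);
        return (c == 0 || t == null) ? 0 : t.sum() / c;
    }
}

public class Main {
    public static void main(String[] args) {
        ClientOpStats stats = new ClientOpStats();
        stats.record("getBlockLocations", 2_000_000L);
        stats.record("getBlockLocations", 4_000_000L);
        System.out.println(stats.count("getBlockLocations"));     // prints 2
        System.out.println(stats.meanNanos("getBlockLocations")); // prints 3000000
    }
}
```

A client could log or export these periodically to spot a slow RPC path (e.g. a noisy neighbor) without Namenode-side metrics.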






Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-09 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on RHEL 7
- Deployed a single node pseudo cluster
- Ran some example jobs
- Verified the Docker environment works with non-entrypoint mode

Eric

On Tue, Jan 8, 2019 at 5:42 AM Sunil G  wrote:

> Hi folks,
>
>
> Thanks to all of you who helped in this release [1] and for voting on RC0. 
> I have created the second release candidate (RC1) for Apache Hadoop 3.2.0.
>
>
> Artifacts for this RC are available here:
>
> http://home.apache.org/~sunilg/hadoop-3.2.0-RC1/
>
>
> RC tag in git is release-3.2.0-RC1.
>
>
>
> The maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1178/
>
>
> This vote will run 7 days (5 weekdays), ending on 14th Jan at 11:59 pm PST.
>
>
>
> 3.2.0 contains 1092 [2] fixed JIRA issues since 3.1.0. The feature 
> additions below are the highlights of this release.
>
> 1. Node Attributes Support in YARN
>
> 2. Hadoop Submarine project for running Deep Learning workloads on YARN
>
> 3. Support service upgrade via YARN Service API and CLI
>
> 4. HDFS Storage Policy Satisfier
>
> 5. Support Windows Azure Storage - Blob file system in Hadoop
>
> 6. Phase 3 improvements for S3Guard and Phase 5 improvements for S3A
>
> 7. Improvements in Router-based HDFS federation
>
>
>
> Thanks to Wangda, Vinod, and Marton for helping me prepare the release.
>
> I have done some testing with my pseudo cluster. My +1 to start.
>
>
>
> Regards,
>
> Sunil
>
>
>
> [1]
>
>
> https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
>
> [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
> AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
> ORDER BY fixVersion ASC
>


Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-09 Thread Wilfred Spiegelenburg
+1 (non binding)

- Built from source on macOS 10.14.2 with JDK 1.8.0_181
- Successful native build on Ubuntu 16.04.3
- Confirmed the checksum and signature
- Deployed a single node cluster (OpenJDK 1.8u191 / CentOS 7.5)
- Uploaded the MR framework
- Configured YARN with the FS
- Ran multiple MR jobs

> On 8 Jan 2019, at 22:42, Sunil G  wrote:
> [...]


Wilfred Spiegelenburg | Software Engineer
cloudera.com 

Re: [Discuss] - HDDS-4 Branch merge

2019-01-09 Thread Mukul Kumar Singh
+1, This is a great feature addition to Ozone.

Thanks,
Mukul

On 1/9/19, 12:52 PM, "Jitendra Pandey"  wrote:

+1, This is a great effort and another milestone in making HDDS and Ozone 
ready for enterprises.

On 1/7/19, 5:10 PM, "Anu Engineer"  wrote:

Hi All,

I would like to propose a merge of HDDS-4 branch to the Hadoop trunk.
HDDS-4 branch implements the security work for HDDS and Ozone.

HDDS-4 branch contains the following features:
- Hadoop Kerberos and Tokens support
- A Certificate infrastructure used by Ozone and HDDS.
- Audit Logging and parsing support (Spread across trunk and HDDS-4)
- S3 Security Support - AWS Signature Support.
- Apache Ranger Support for Ozone

I will follow up with a formal vote later this week if I hear no
objections. AFAIK, the changes are isolated to HDDS/Ozone and should not
impact any other Hadoop project.

Thanks
Anu



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org




Re: [Discuss] - HDDS-4 Branch merge

2019-01-09 Thread Dinesh Chitlangia
+1 (non-binding) – Superb feature additions to Ozone.

Thanks,
Dinesh



On 1/10/19, 1:21 AM, "Mukul Kumar Singh"  wrote:

    [...]





