Re: request for wiki edit privilege

2019-01-22 Thread Akira Ajisaka
Granted.

Thanks,
Akira

On Tue, Jan 22, 2019 at 16:04 Doroszlai, Attila :
>
> Hi,
>
> I'd like to fix some typos and dead links in the wiki.  Could you
> please grant me edit privilege?
> Username: adoroszlai
>
> thanks,
> Attila
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>




[jira] [Created] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-22 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-14223:
---

 Summary: RBF: Add configuration documents for using multiple 
sub-clusters
 Key: HDFS-14223
 URL: https://issues.apache.org/jira/browse/HDFS-14223
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Reporter: Takanobu Asanuma


When using multiple sub-clusters for a mount point, we need to set 
{{dfs.federation.router.file.resolver.client.class}} to 
{{MultipleDestinationMountTableResolver}}. The current documentation lacks 
this explanation. We should add it to HDFSRouterFederation.md and 
hdfs-rbf-default.xml.
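
For illustration, the documented setting might look like the fragment below. This is a sketch based only on the names quoted above; the fully qualified class name and the hdfs-rbf-site.xml file name are my assumptions, not text from the issue:

{code:xml}
<!-- hdfs-rbf-site.xml (illustrative): resolve mount points that span
     multiple sub-clusters -->
<property>
  <name>dfs.federation.router.file.resolver.client.class</name>
  <value>org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver</value>
</property>
{code}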



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[ANNOUNCE] Apache Hadoop 3.2.0 release

2019-01-22 Thread Sunil G
Greetings all,

It gives me great pleasure to announce that the Apache Hadoop community has
voted to
release Apache Hadoop 3.2.0.

Apache Hadoop 3.2.0 is the first release in the Apache Hadoop 3.2 line for
2019 and includes 1092 fixes since the previous Hadoop 3.1.0 release.
Of these fixes:
   - 230 in Hadoop Common
   - 344 in HDFS
   - 484 in YARN
   - 34 in MapReduce

Apache Hadoop 3.2.0 contains a number of significant features and
enhancements, a few of which are noted below.

- ABFS Filesystem connector : supports the latest Azure Data Lake Storage
  Gen2.
- Enhanced S3A connector : includes better resilience to throttled AWS S3 and
  DynamoDB IO.
- Node Attributes Support in YARN : allows tagging nodes with multiple labels
  based on their attributes, and supports placing containers based on
  expressions over these labels.
- Storage Policy Satisfier : lets HDFS (Hadoop Distributed File System)
  applications move blocks between storage types as storage policies are set
  on files/directories.
- Hadoop Submarine : enables data engineers to easily develop, train, and
  deploy deep learning models (in TensorFlow) on the same Hadoop YARN cluster.
- C++ HDFS client : provides asynchronous IO to HDFS, which helps downstream
  projects such as Apache ORC.
- Upgrades for long-running services : supports in-place, seamless upgrades
  of long-running containers via the YARN Native Service API and CLI.

* For major changes included in the Hadoop 3.2 line, please refer to the
Hadoop 3.2.0 main page [1].
* For more details about the fixes in the 3.2.0 release, please read the
CHANGELOG [2] and RELEASENOTES [3].

The release news is also posted on the Hadoop website; you can go directly to
the downloads section [4].

Many thanks to everyone who contributed to the release, and to everyone in the
Apache Hadoop community! This release is a direct result of your great
contributions.
Many thanks to Wangda Tan, Vinod Kumar Vavilapalli, and Marton Elek, who
helped with this release process.

[1] https://hadoop.apache.org/docs/r3.2.0/
[2]
https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-common/release/3.2.0/CHANGELOG.3.2.0.html
[3]
https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-common/release/3.2.0/RELEASENOTES.3.2.0.html
[4] https://hadoop.apache.org/releases.html

Many Thanks,
Sunil Govindan


proposed new repository for hadoop/ozone docker images (+update on docker works)

2019-01-22 Thread Elek, Marton



TLDR;

I have proposed creating a separate git repository for the Ozone docker
images in HDDS-851 (hadoop-docker-ozone.git).

If there are no objections in the next 3 days, I will ask an Apache Member
to create the repository.




LONG VERSION:

In HADOOP-14898, multiple docker containers and helper scripts were created
for Hadoop.

The main goals were to:

 1.) help development with easy-to-use docker images
 2.) provide official Hadoop images to make it easy to test new features

As of now we have:

 - the apache/hadoop-runner image (which contains the required dependencies
but not Hadoop itself)
 - the apache/hadoop:2 and apache/hadoop:3 images (to try out the latest
Hadoop from the 2/3 lines)

The base image for running Hadoop (apache/hadoop-runner) is also heavily used
for Ozone distribution and development.

The Ozone distribution contains docker-compose based cluster definitions to
start various types of clusters, and scripts for smoketesting. (See
HADOOP-16063 for more details.)

Note: I personally believe that these definitions help a lot in starting
different types of clusters. For example, it can be tricky to try out
router-based federation, as it requires multiple HA clusters, but with a
simple docker-compose definition [1] it can be started in under 3 minutes.
(HADOOP-16063 is about creating these definitions for various hdfs/yarn use
cases.)
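
For readers who have not seen these definitions, a cluster in this style
looks roughly like the sketch below. The service names, image tag, and env
file are illustrative only, not the actual HADOOP-16063 files:

{code:yaml}
# docker-compose.yaml (sketch): a minimal HDFS cluster from the
# apache/hadoop:3 image; configuration is injected via an env file
version: "3"
services:
  namenode:
    image: apache/hadoop:3
    hostname: namenode
    command: ["hdfs", "namenode"]
    env_file: ./config
  datanode:
    image: apache/hadoop:3
    command: ["hdfs", "datanode"]
    env_file: ./config
{code}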

As of now we have dedicated branches in the hadoop git repository for the
docker images (docker-hadoop-runner, docker-hadoop-2, docker-hadoop-3). It
turns out that a separate repository would be more effective, as Docker Hub
can only use full branch names as tags.

We would like to provide Ozone docker images to make the evaluation as easy
as 'docker run -d apache/hadoop-ozone:0.3.0'; therefore, in HDDS-851 we
agreed to create a separate repository for the hadoop-ozone docker images.

If this approach works well, we can also move the existing
docker-hadoop-2/docker-hadoop-3/docker-hadoop-runner branches out of
hadoop.git into another separate hadoop-docker.git repository.

Please let me know if you have any comments,

Thanks,
Marton

1: see
https://github.com/flokkr/runtime-compose/tree/master/hdfs/routerfeder
as an example




Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/

[Jan 21, 2019 1:54:58 AM] (tasanuma) HADOOP-16046. [JDK 11] Correct the 
compiler exclusion of
[Jan 21, 2019 8:54:14 AM] (wwei) YARN-9204. RM fails to start if absolute 
resource is specified for
[Jan 21, 2019 3:54:51 PM] (sunilg) Make 3.2.0 aware to other branches
[Jan 21, 2019 5:11:26 PM] (sunilg) Make 3.2.0 aware to other branches - jdiff
[Jan 22, 2019 1:19:05 AM] (aajisaka) HADOOP-15787. [JDK11] 
TestIPC.testRTEDuringConnectionSetup fails.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [328K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [84K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifac

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-22 Thread Brian Demers
Is anyone else getting timeout errors with the MIT keyserver pool? The
Ubuntu pool seems OK.

On Tue, Jan 22, 2019 at 1:28 AM Wangda Tan  wrote:

> It seems there's no useful information from the log :(. Maybe I should
> change my key and try again. In the meantime, Sunil will help me to create
> release and get 3.1.2 out.
>
> Thanks everybody for helping with this, really appreciate it!
>
> Best,
> Wangda
>
> On Mon, Jan 21, 2019 at 9:55 PM Chris Lambertus  wrote:
>
>> 2019-01-22 05:40:41 INFO  [99598137-805273] -
>> com.sonatype.nexus.staging.internal.DefaultStagingManager - Dropping
>> staging repositories [orgapachehadoop-1201]
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> com.sonatype.nexus.staging.internal.task.StagingBackgroundTask - STARTED
>> Dropping staging repositories: [orgapachehadoop-1201]
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> com.sonatype.nexus.staging.internal.task.RepositoryDropTask - Dropping:
>> DropItem{id=orgapachehadoop-1201, state=open, group=false}
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.proxy.registry.DefaultRepositoryRegistry - Removed
>> repository "orgapachehadoop-1201 (staging: open)"
>> [id=orgapachehadoop-1201][contentClass=Maven2][mainFacet=org.sonatype.nexus.proxy.maven.MavenHostedRepository]
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.application.DefaultNexusConfiguration -
>> Applying Nexus Configuration due to changes in [Repository Grouping
>> Configuration] made by *TASK...
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> com.sonatype.nexus.staging.internal.task.StagingBackgroundTask - FINISHED
>> Dropping staging repositories: [orgapachehadoop-1201]
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.application.DefaultNexusConfiguration -
>> Applying Nexus Configuration due to changes in [Scheduled Tasks] made by
>> *TASK...
>> 2019-01-22 05:40:42 INFO  [pool-1-thread-3] -
>> org.sonatype.nexus.tasks.DeleteRepositoryFoldersTask - Scheduled task
>> (DeleteRepositoryFoldersTask) started :: Deleting folders with repository
>> ID: orgapachehadoop-1201
>> 2019-01-22 05:40:42 INFO  [pool-1-thread-3] -
>> org.sonatype.nexus.tasks.DeleteRepositoryFoldersTask - Scheduled task
>> (DeleteRepositoryFoldersTask) finished :: Deleting folders with repository
>> ID: orgapachehadoop-1201 (started 2019-01-22T05:40:42+00:00, runtime
>> 0:00:00.023)
>> 2019-01-22 05:40:42 INFO  [pool-1-thread-3] -
>> org.sonatype.nexus.configuration.application.DefaultNexusConfiguration -
>> Applying Nexus Configuration due to changes in [Scheduled Tasks] made by
>> *TASK...
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> com.sonatype.nexus.staging.internal.DefaultStagingManager - Creating
>> staging repository under profile id = '6a441994c87797' for deploy
>> RestDeployRequest
>> [path=/org/apache/hadoop/hadoop-main/3.1.2/hadoop-main-3.1.2.pom,
>> repositoryType=maven2, action=create, acceptMode=DEPLOY] (explicit=false)
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.proxy.maven.routing.internal.ManagerImpl - Initializing
>> non-existing prefix file of newly added "orgapachehadoop-1202 (staging:
>> open)" [id=orgapachehadoop-1202]
>> 2019-01-22 05:40:50 INFO  [ar-7-thread-5  ] -
>> org.sonatype.nexus.proxy.maven.routing.internal.ManagerImpl - Updated and
>> published prefix file of "orgapachehadoop-1202 (staging: open)"
>> [id=orgapachehadoop-1202]
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.proxy.registry.DefaultRepositoryRegistry - Added
>> repository "orgapachehadoop-1202 (staging: open)"
>> [id=orgapachehadoop-1202][contentClass=Maven2][mainFacet=org.sonatype.nexus.proxy.maven.MavenHostedRepository]
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.configuration.application.DefaultNexusConfiguration -
>> Applying Nexus Configuration made by wangda...
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.configuration.application.DefaultNexusConfiguration -
>> Applying Nexus Configuration due to changes in [orgapachehadoop-1202
>> 

Re: Cannot kill Pre-Commit jenkins builds

2019-01-22 Thread Arun Suresh
Hey Vinod.. Ping!

Cheers
-Arun

On Fri, Jan 18, 2019 at 9:46 AM Arun Suresh  wrote:

> Hi Vinod
>
> Can you please help with this:
> https://issues.apache.org/jira/browse/INFRA-17673 ?
>
> Cheers
> -Arun
>
> On Wed, Jan 16, 2019, 12:53 PM Arun Suresh 
>> Hi
>>
>> We are currently trying to get the branch-2 pre-commit builds working.
>> I used to be able to kill Pre-Commit jenkins jobs, but looks like I am
>> not allowed to anymore. Has anything changed recently w.r.t permissions etc
>> ?
>>
>> Cheers
>> -Arun
>>
>


[jira] [Created] (HDDS-990) Typos in Ozone doc

2019-01-22 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-990:
--

 Summary: Typos in Ozone doc
 Key: HDDS-990
 URL: https://issues.apache.org/jira/browse/HDDS-990
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: documentation
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Fix the following issues in {{hadoop-hdds/docs/content}}:
 # {{bucket delete}} description and example reference volume instead of bucket
 # {{compose/ozone}} doesn't launch the Namenode; only {{compose/ozone-hdfs}} does
 # Java API example doesn't compile:
 #* use regular quotes instead of "word-processor" ones
 #* typos in variable and class names
 # {{delete key}} -> {{key delete}}
 # various other typos







[jira] [Created] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-22 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-991:
---

 Summary: Fix failures in TestSecureOzoneCluster
 Key: HDDS-991
 URL: https://issues.apache.org/jira/browse/HDDS-991
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Ajay Kumar
Assignee: Ajay Kumar


Fix failures in TestSecureOzoneCluster







[jira] [Created] (HDDS-992) ozone-default.xml has invalid text from a stale merge

2019-01-22 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-992:
--

 Summary: ozone-default.xml has invalid text from a stale merge
 Key: HDDS-992
 URL: https://issues.apache.org/jira/browse/HDDS-992
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


This jira aims to remove the invalid text "===" and ">>" from 
[https://github.com/apache/hadoop/blob/trunk/hadoop-hdds/common/src/main/resources/ozone-default.xml]
{code:xml}
<property>
  <name>ozone.scm.kerberos.keytab.file</name>
===
  <name>hdds.scm.kerberos.keytab.file</name>
>>> HDDS-70. Fix config names for secure ksm and scm. Contributed by Ajay Kumar.
  <tag>OZONE, SECURITY</tag>
  <description>The keytab file used by each SCM daemon to login as its
    service principal. The principal name is configured with
    hdds.scm.kerberos.principal.
  </description>
</property>
{code}







Re: Cannot kill Pre-Commit jenkins builds

2019-01-22 Thread Vinod Kumar Vavilapalli
Minus private.

Which specific job are you looking at? I looked at
https://builds.apache.org/job/PreCommit-YARN-Build/ but can't seem to find
any user-specific auth.

+Vinod

> On Jan 22, 2019, at 10:00 AM, Arun Suresh  wrote:
> 
> Hey Vinod.. Ping!
> 
> Cheers
> -Arun
> 
> On Fri, Jan 18, 2019 at 9:46 AM Arun Suresh  wrote:
> 
>> Hi Vinod
>> 
>> Can you please help with this:
>> https://issues.apache.org/jira/browse/INFRA-17673 ?
>> 
>> Cheers
>> -Arun
>> 
>> On Wed, Jan 16, 2019, 12:53 PM Arun Suresh > 
>>> Hi
>>> 
>>> We are currently trying to get the branch-2 pre-commit builds working.
>>> I used to be able to kill Pre-Commit jenkins jobs, but looks like I am
>>> not allowed to anymore. Has anything changed recently w.r.t permissions etc
>>> ?
>>> 
>>> Cheers
>>> -Arun
>>> 
>> 



[jira] [Created] (HDDS-993) Update hadoop version to 3.2.0

2019-01-22 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-993:
---

 Summary: Update hadoop version to 3.2.0
 Key: HDDS-993
 URL: https://issues.apache.org/jira/browse/HDDS-993
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is to update the Hadoop version to 3.2.0 and to clean up the
snapshot repository configuration in the Ozone module.







Re: Cannot kill Pre-Commit jenkins builds

2019-01-22 Thread Arun Suresh
Hmmm.. as per this (https://wiki.apache.org/general/Jenkins), it looks like my
id needs to be added to the hudson-jobadmin group to effect any changes on
Jenkins.
But I am wondering why it was revoked in the first place.

On Tue, Jan 22, 2019 at 4:21 PM Vinod Kumar Vavilapalli 
wrote:

> Minus private.
>
> Which specific job are you looking at? I looked at
> https://builds.apache.org/job/PreCommit-YARN-Build/ but can't seem to
> find any user-specific auth.
>
> +Vinod
>
> On Jan 22, 2019, at 10:00 AM, Arun Suresh  wrote:
>
> Hey Vinod.. Ping!
>
> Cheers
> -Arun
>
> On Fri, Jan 18, 2019 at 9:46 AM Arun Suresh  wrote:
>
> Hi Vinod
>
> Can you please help with this:
> https://issues.apache.org/jira/browse/INFRA-17673 ?
>
> Cheers
> -Arun
>
> On Wed, Jan 16, 2019, 12:53 PM Arun Suresh 
> Hi
>
> We are currently trying to get the branch-2 pre-commit builds working.
> I used to be able to kill Pre-Commit jenkins jobs, but looks like I am
> not allowed to anymore. Has anything changed recently w.r.t permissions etc
> ?
>
> Cheers
> -Arun
>
>
>
>


[jira] [Created] (HDDS-994) Unable to start OM from secure docker compose

2019-01-22 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-994:
---

 Summary: Unable to start OM from secure docker compose
 Key: HDDS-994
 URL: https://issues.apache.org/jira/browse/HDDS-994
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Ajay Kumar


{code:java}
om_1 | 2019-01-23 00:50:58 ERROR OzoneManager:418 - Unable to read key pair for OM.
om_1 | org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading private file for OzoneManager
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:460)
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:416)
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.startSecretManagerIfNecessary(OzoneManager.java:980)
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.start(OzoneManager.java:802)
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:589)
om_1 | Caused by: java.lang.NullPointerException
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:457)
om_1 |   ... 4 more
om_1 | 2019-01-23 00:50:58 ERROR OzoneManager:593 - Failed to start the OzoneManager.
om_1 | java.lang.RuntimeException: org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading private file for OzoneManager
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:419)
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.startSecretManagerIfNecessary(OzoneManager.java:980)
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.start(OzoneManager.java:802)
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:589)
om_1 | Caused by: org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading private file for OzoneManager
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:460)
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.startSecretManager(OzoneManager.java:416)
om_1 |   ... 3 more
om_1 | Caused by: java.lang.NullPointerException
om_1 |   at org.apache.hadoop.ozone.om.OzoneManager.readKeyPair(OzoneManager.java:457)
om_1 |   ... 4 more
om_1 | 2019-01-23 00:50:58 INFO  ExitUtil:210 - Exiting with status 1: java.lang.RuntimeException: org.apache.hadoop.ozone.security.OzoneSecurityException: Error reading private file for OzoneManager
{code}







[jira] [Created] (HDFS-14224) RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest

2019-01-22 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-14224:
---

 Summary: RBF: NPE in GetContentSummary For GetEcPolicy in Case of 
Multi Dest
 Key: HDFS-14224
 URL: https://issues.apache.org/jira/browse/HDFS-14224
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ayush Saxena
Assignee: Ayush Saxena


NullPointerException in GetContentSummary for the EC policy when there are 
multiple destinations.



