[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337145868
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
 
 Review comment:
   Typo: `. Which can be used to test cluster and report problems.` -> ` which 
can be used to test cluster and report problems.`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337180648
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
+
+  1. Fork `apache/hadoop-ozone` github repository (

[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337153250
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
 
 Review comment:
   Typo: `relase` -> `release`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337180306
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
+
+  1. Fork `apache/hadoop-ozone` github repository (

[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337179403
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
+
+  1. Fork `apache/hadoop-ozone` github repository (

[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337180306
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
+
+  1. Fork `apache/hadoop-ozone` github repository (

[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337182924
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
+
+  1. Fork `apache/hadoop-ozone` github repository (

[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337146397
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
 
 Review comment:
   NIT: Remove `in`
   Change `mail` to `email`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337182924
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
+
+  1. Fork `apache/hadoop-ozone` github repository (

[jira] [Created] (HDDS-2341) Validate tar entry path during extraction

2019-10-21 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2341:
--

 Summary: Validate tar entry path during extraction
 Key: HDDS-2341
 URL: https://issues.apache.org/jira/browse/HDDS-2341
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Containers extracted from tar.gz should be validated to confine entries to the 
archive's root directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2342) ContainerStateMachine$chunkExecutor threads hold onto native memory

2019-10-21 Thread Lokesh Jain (Jira)
Lokesh Jain created HDDS-2342:
-

 Summary: ContainerStateMachine$chunkExecutor threads hold onto 
native memory
 Key: HDDS-2342
 URL: https://issues.apache.org/jira/browse/HDDS-2342
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Lokesh Jain
Assignee: Lokesh Jain


In a heap dump many threads in ContainerStateMachine$chunkExecutor holds onto 
native memory in the ThreadLocal map. Every such thread holds onto chunk worth 
of DirectByteBuffer. Since these threads are involved in write and read chunk 
operations, the JVM allocates chunk (16MB) worth of DirectByteBuffer in the 
ThreadLocalMap for every thread involved in IO. Also the native memory would 
not be GC'ed as long as the thread is alive.

It would be better to reduce the default number of chunk executor threads and 
have them in proportion to number of disks on the datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2343) Add immutable entries in to the DoubleBuffer.

2019-10-21 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2343:


 Summary: Add immutable entries in to the DoubleBuffer.
 Key: HDDS-2343
 URL: https://issues.apache.org/jira/browse/HDDS-2343
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


OMBucketCreateRequest.java L181:

omClientResponse =
 new OMBucketCreateResponse(omBucketInfo,
 omResponse.build());

 

We add this to double-buffer, and double-buffer flushThread which is running in 
the background when picks up, converts to protoBuf and to ByteArray and write 
to rocksDB tables. So, during this conversion, if any other request changes 
internal structure(like acls list) of OMBucketInfo we might get 
ConcurrentModificationException.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] hanishakoneru commented on issue #9: HDDS-2240. Command line tool for OM Admin

2019-10-21 Thread GitBox
hanishakoneru commented on issue #9: HDDS-2240. Command line tool for OM Admin
URL: https://github.com/apache/hadoop-ozone/pull/9#issuecomment-544712412
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #67: HDDS-2340. Updated ratis.version to get latest snapshot.

2019-10-21 Thread GitBox
bharatviswa504 merged pull request #67: HDDS-2340. Updated ratis.version to get 
latest snapshot.
URL: https://github.com/apache/hadoop-ozone/pull/67
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2340) Update RATIS snapshot version

2019-10-21 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2340.
--
Resolution: Fixed

> Update RATIS snapshot version
> -
>
> Key: HDDS-2340
> URL: https://issues.apache.org/jira/browse/HDDS-2340
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Update RATIS version to incorporate fix that went into RATIS-707 among others.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #69: HDDS-2343. Add immutable entries in to the DoubleBuffer for Bucket requests.

2019-10-21 Thread GitBox
bharatviswa504 opened a new pull request #69: HDDS-2343. Add immutable entries 
in to the DoubleBuffer for Bucket requests.
URL: https://github.com/apache/hadoop-ozone/pull/69
 
 
   
   =## What changes were proposed in this pull request?
   
   ```
   OMBucketCreateRequest.java L181:
   
   omClientResponse =
   new OMBucketCreateResponse(omBucketInfo,
   omResponse.build());
   ```
   

   We add this to double-buffer, and double-buffer flushThread which is running 
in the background when picks up, converts to protoBuf and to ByteArray and 
write to rocksDB tables. So, during this conversion(This conversion will be 
done without any lock acquire), if any other request changes internal 
structure(like acls list) of OMBucketInfo we might get 
ConcurrentModificationException.
   
   In this Jira handled Bucket requests, when adding entries to doubleBuffer 
cloned them and added to it. 
   
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2343
   
   ## How was this patch tested?
   
   Ran TestOzoneRpcClient which tests this code path.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #59: HDDS-2327. Provide new Freon test to test Ratis pipeline with pure XceiverClientRatis

2019-10-21 Thread GitBox
xiaoyuyao commented on a change in pull request #59: HDDS-2327. Provide new 
Freon test to test Ratis pipeline with pure XceiverClientRatis
URL: https://github.com/apache/hadoop-ozone/pull/59#discussion_r337273215
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/DatanodeChunkGenerator.java
 ##
 @@ -0,0 +1,163 @@
+package org.apache.hadoop.ozone.freon;
+
+import java.nio.charset.StandardCharsets;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumData;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.DatanodeBlockID;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.WriteChunkRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol;
+import org.apache.hadoop.ozone.OzoneSecurityUtil;
+import org.apache.hadoop.ozone.common.Checksum;
+
+import com.codahale.metrics.Timer;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Data generator to use pure datanode XCeiver interface.
+ */
+@Command(name = "dcg",
+aliases = "datanode-chunk-generator",
+description = "Create as many chunks as possible with pure XCeiverClient.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class DatanodeChunkGenerator extends BaseFreonGenerator implements
+Callable {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DatanodeChunkGenerator.class);
+
+  @Option(names = {"-s", "--size"},
+  description = "Size of the generated chunks (in bytes)",
+  defaultValue = "1024")
+  private int chunkSize;
+
+  @Option(names = {"-l", "--pipeline"},
 
 Review comment:
   NIT: can we use -p to specify pipeline to be consistent?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #27: HDDS-2278. Run S3 test suite on OM HA cluster.

2019-10-21 Thread GitBox
bharatviswa504 commented on issue #27: HDDS-2278. Run S3 test suite on OM HA 
cluster.
URL: https://github.com/apache/hadoop-ozone/pull/27#issuecomment-544741861
 
 
   Rebased with the latest master(after ratis version upgrade)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #59: HDDS-2327. Provide new Freon test to test Ratis pipeline with pure XceiverClientRatis

2019-10-21 Thread GitBox
xiaoyuyao commented on a change in pull request #59: HDDS-2327. Provide new 
Freon test to test Ratis pipeline with pure XceiverClientRatis
URL: https://github.com/apache/hadoop-ozone/pull/59#discussion_r337278417
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/DatanodeChunkGenerator.java
 ##
 @@ -0,0 +1,163 @@
+package org.apache.hadoop.ozone.freon;
+
+import java.nio.charset.StandardCharsets;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumData;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.DatanodeBlockID;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.WriteChunkRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol;
+import org.apache.hadoop.ozone.OzoneSecurityUtil;
+import org.apache.hadoop.ozone.common.Checksum;
+
+import com.codahale.metrics.Timer;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Data generator to use pure datanode XCeiver interface.
+ */
+@Command(name = "dcg",
+aliases = "datanode-chunk-generator",
+description = "Create as many chunks as possible with pure XCeiverClient.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class DatanodeChunkGenerator extends BaseFreonGenerator implements
+Callable {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DatanodeChunkGenerator.class);
+
+  @Option(names = {"-s", "--size"},
+  description = "Size of the generated chunks (in bytes)",
+  defaultValue = "1024")
+  private int chunkSize;
+
+  @Option(names = {"-l", "--pipeline"},
+  description = "Pipeline to use. By default the first RATIS/THREE "
+  + "pipeline will be used.",
+  defaultValue = "")
+  private String pipelineId;
+
+  private XceiverClientSpi xceiverClientSpi;
+
+  private Timer timer;
+
+  private ByteString dataToWrite;
+  private ChecksumData checksumProtobuf;
+
+  @Override
+  public Void call() throws Exception {
+
+init();
+
+OzoneConfiguration ozoneConf = createOzoneConfiguration();
+if (OzoneSecurityUtil.isSecurityEnabled(ozoneConf)) {
+  throw new IllegalArgumentException(
+  "Datanode chunk generator is not supported in secure environment");
+}
+
+StorageContainerLocationProtocol scmLocationClient =
+createStorageContainerLocationClient(ozoneConf);
+List pipelines = scmLocationClient.listPipelines();
+
+Pipeline pipeline;
+if (pipelineId != null && pipelineId.length() > 0) {
+  pipeline = pipelines.stream()
+  .filter(p -> p.getId().toString().equals(pipelineId))
+  .findFirst()
+  .orElseThrow(() -> new IllegalArgumentException(
+  "Pipeline ID is defined, but there is no such pipeline: "
+  + pipelineId));
+
+} else {
+  pipeline = pipelines.stream()
+  .filter(p -> p.getFactor() == ReplicationFactor.THREE)
+  .findFirst()
+  .orElseThrow(() -> new IllegalArgumentException(
+  "Pipeline ID is defined, and no pipeline has been found with "
 
 Review comment:
   I think the 95-95 should be "pipeline ID is not defined, and no pipeline has 
been found with ..."


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on issue #59: HDDS-2327. Provide new Freon test to test Ratis pipeline with pure XceiverClientRatis

2019-10-21 Thread GitBox
xiaoyuyao commented on issue #59: HDDS-2327. Provide new Freon test to test 
Ratis pipeline with pure XceiverClientRatis
URL: https://github.com/apache/hadoop-ozone/pull/59#issuecomment-544744985
 
 
   Thanks @elek  for the PR. That LGTM. Just few minor comments in addition to 
@anuengineer 's comments. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #9: HDDS-2240. Command line tool for OM Admin

2019-10-21 Thread GitBox
bharatviswa504 commented on a change in pull request #9: HDDS-2240. Command 
line tool for OM Admin
URL: https://github.com/apache/hadoop-ozone/pull/9#discussion_r337278663
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -223,6 +226,20 @@ public RpcClient(Configuration conf, String omServiceId) 
throws IOException {
 OzoneConfigKeys.OZONE_NETWORK_TOPOLOGY_AWARE_READ_DEFAULT);
   }
 
+  @Override
+  public List getOmRoleInfos() throws IOException {
 
 Review comment:
   If some user call's this API after a period of time the information in 
OMRoleInfo might be stale. If we expose here, I think we should contact OM and 
get this info again.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12049) Recommissioning live nodes stalls the NN

2019-10-21 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-12049.

Resolution: Duplicate

> Recommissioning live nodes stalls the NN
> 
>
> Key: HDFS-12049
> URL: https://issues.apache.org/jira/browse/HDFS-12049
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> A node refresh will recommission included nodes that are alive and in 
> decommissioning or decommissioned state.  The recommission will scan all 
> blocks on the node, find over replicated blocks, chose an excess, queue an 
> invalidate.
> The process is expensive and worsened by overhead of storage types (even when 
> not in use).  It can be especially devastating because the write lock is held 
> for the entire node refresh.  _Recommissioning 67 nodes with ~500k 
> blocks/node stalled rpc services for over 4 mins._



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #68: HDDS-2320. Negative value seen for OM NumKeys Metric in JMX.

2019-10-21 Thread GitBox
bharatviswa504 merged pull request #68: HDDS-2320. Negative value seen for OM 
NumKeys Metric in JMX.
URL: https://github.com/apache/hadoop-ozone/pull/68
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #68: HDDS-2320. Negative value seen for OM NumKeys Metric in JMX.

2019-10-21 Thread GitBox
bharatviswa504 commented on issue #68: HDDS-2320. Negative value seen for OM 
NumKeys Metric in JMX.
URL: https://github.com/apache/hadoop-ozone/pull/68#issuecomment-544751884
 
 
   Thank You @avijayanhwx for the contribution and all for the review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2320) Negative value seen for OM NumKeys Metric in JMX.

2019-10-21 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2320.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Negative value seen for OM NumKeys Metric in JMX.
> -
>
> Key: HDDS-2320
> URL: https://issues.apache.org/jira/browse/HDDS-2320
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: Screen Shot 2019-10-17 at 11.31.08 AM.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> While running teragen/terasort on a cluster and verifying number of keys 
> created on Ozone Manager, I noticed that the value of NumKeys counter metric 
> to be a negative value !Screen Shot 2019-10-17 at 11.31.08 AM.png! .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #49: HDDS-2310. Add support to add ozone ranger plugin to Ozone Manager cl…

2019-10-21 Thread GitBox
bharatviswa504 merged pull request #49: HDDS-2310. Add support to add ozone 
ranger plugin to Ozone Manager cl…
URL: https://github.com/apache/hadoop-ozone/pull/49
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #49: HDDS-2310. Add support to add ozone ranger plugin to Ozone Manager cl…

2019-10-21 Thread GitBox
bharatviswa504 commented on issue #49: HDDS-2310. Add support to add ozone 
ranger plugin to Ozone Manager cl…
URL: https://github.com/apache/hadoop-ozone/pull/49#issuecomment-544753350
 
 
   Thank You @vivekratnavel for the contribution and all  for the review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2310) Add support to add ozone ranger plugin to Ozone Manager classpath

2019-10-21 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2310.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Add support to add ozone ranger plugin to Ozone Manager classpath
> -
>
> Key: HDDS-2310
> URL: https://issues.apache.org/jira/browse/HDDS-2310
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, there is no way to add Ozone Ranger plugin to Ozone Manager 
> classpath. 
> We should be able to set an environment variable that will be respected by 
> ozone and added to Ozone Manager classpath.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm opened a new pull request #70: HDDS-1643. Send hostName also part of OMRequest.

2019-10-21 Thread GitBox
cxorm opened a new pull request #70: HDDS-1643. Send hostName also part of 
OMRequest.
URL: https://github.com/apache/hadoop-ozone/pull/70
 
 
   ## What changes were proposed in this pull request?
   ```
   OzoneManagerProtocol.proto L322:
   
   optional string hostName = 4;
   ```
   To make hostName part of OMRequest, we add this to UserInfo.
   
   And with the change, we add corresponding ```getHostName``` method and 
modify the ```checkAcls``` method.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-1643
   
   ## How was this patch tested?
   Ran the unit test "TestOMClientRequestWithUserInfo".


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] arp7 commented on a change in pull request #7: HDDS-1228. Chunk Scanner Checkpoints

2019-10-21 Thread GitBox
arp7 commented on a change in pull request #7: HDDS-1228. Chunk Scanner 
Checkpoints
URL: https://github.com/apache/hadoop-ozone/pull/7#discussion_r337312099
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataScanOrder.java
 ##
 @@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.ozone.container.common.impl;
+
+import org.apache.hadoop.ozone.container.common.interfaces.Container;
+
+import java.time.Instant;
+import java.util.Comparator;
+import java.util.Optional;
+
+/**
+ * Orders containers:
+ * 1. containers not yet scanned first,
+ * 2. then least recently scanned first,
+ * 3. ties are broken by containerID.
+ */
+public class ContainerDataScanOrder implements Comparator> {
+
+  public static final Comparator> INSTANCE =
+  new ContainerDataScanOrder();
+
+  @Override
+  public int compare(Container o1, Container o2) {
+ContainerData d1 = o1.getContainerData();
+ContainerData d2 = o2.getContainerData();
+
+Optional scan1 = d1.lastDataScanTime();
+boolean scanned1 = scan1.isPresent();
+Optional scan2 = d2.lastDataScanTime();
+boolean scanned2 = scan2.isPresent();
+
+int result = Boolean.compare(scanned1, scanned2);
+if (0 == result && scanned1 && scanned2) {
+  result = scan1.get().compareTo(scan2.get());
+}
+if (0 == result) {
+  result = Long.compare(d1.getContainerID(), d2.getContainerID());
 
 Review comment:
   I think this check is incomplete. If scanned1 is absent however scanned2 is 
present, then o1 should order before o2, irrespective of their container IDs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] arp7 merged pull request #61: HDDS-2333. Enable sync option for OM non-HA.

2019-10-21 Thread GitBox
arp7 merged pull request #61: HDDS-2333. Enable sync option for OM non-HA.
URL: https://github.com/apache/hadoop-ozone/pull/61
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] arp7 commented on issue #61: HDDS-2333. Enable sync option for OM non-HA.

2019-10-21 Thread GitBox
arp7 commented on issue #61: HDDS-2333. Enable sync option for OM non-HA.
URL: https://github.com/apache/hadoop-ozone/pull/61#issuecomment-544783160
 
 
   Merged this based on @anuengineer's review. Thanks for the review Anu and 
thanks Bharat for the contribution.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2333) Enable sync option for OM non-HA

2019-10-21 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-2333.
-
Fix Version/s: 0.5.0
   Resolution: Fixed

Merged this with [~aengineer]'s +1 on the PR. Thanks for the review Anu and 
thanks Bharat for the contribution.

> Enable sync option for OM non-HA 
> -
>
> Key: HDDS-2333
> URL: https://issues.apache.org/jira/browse/HDDS-2333
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In OM non-HA when double buffer flushes, it should commit with sync turned 
> on. As in non-HA when power failure/system crashes, the operations which are 
> acknowledged by OM might be lost in this kind of scenario. (As in rocks DB 
> with Sync false, the flush is asynchronous and it will not persist to storage 
> system)
>  
> In HA, this is not a problem because the guarantee is provided by ratis and 
> ratis logs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] arp7 commented on issue #53: HDDS-2330. Random key generator can get stuck

2019-10-21 Thread GitBox
arp7 commented on issue #53: HDDS-2330. Random key generator can get stuck
URL: https://github.com/apache/hadoop-ozone/pull/53#issuecomment-544783432
 
 
   I assume all the integration test failures are unrelated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] arp7 merged pull request #69: HDDS-2343. Add immutable entries in to the DoubleBuffer for Bucket requests.

2019-10-21 Thread GitBox
arp7 merged pull request #69: HDDS-2343. Add immutable entries in to the 
DoubleBuffer for Bucket requests.
URL: https://github.com/apache/hadoop-ozone/pull/69
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] arp7 commented on issue #69: HDDS-2343. Add immutable entries in to the DoubleBuffer for Bucket requests.

2019-10-21 Thread GitBox
arp7 commented on issue #69: HDDS-2343. Add immutable entries in to the 
DoubleBuffer for Bucket requests.
URL: https://github.com/apache/hadoop-ozone/pull/69#issuecomment-544783887
 
 
   The IT failures are seen on multiple jiras and likely to be unrelated. 
Committing this shortly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] arp7 commented on issue #53: HDDS-2330. Random key generator can get stuck

2019-10-21 Thread GitBox
arp7 commented on issue #53: HDDS-2330. Random key generator can get stuck
URL: https://github.com/apache/hadoop-ozone/pull/53#issuecomment-544784457
 
 
   The failures are unrelated indeed. I will commit this shortly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] arp7 merged pull request #53: HDDS-2330. Random key generator can get stuck

2019-10-21 Thread GitBox
arp7 merged pull request #53: HDDS-2330. Random key generator can get stuck
URL: https://github.com/apache/hadoop-ozone/pull/53
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[ANNOUNCE] Apache Hadoop 3.1.3 release!

2019-10-21 Thread Zhankun Tang
Hi folks,

It's a  great honor to announce that the Apache Hadoop community
has released Apache Hadoop 3.1.3.

Apache Hadoop 3.1.3 is the stable maintenance release of Apache Hadoop 3.1
line, which includes 246 fixes and improvements since Hadoop 3.1.2.

Thanks to our community!

*Apache Hadoop 3.1.3 released*: https://hadoop.apache.org/release/3.1.3.html
*Changelog*: https://s.apache.org/dqxla
*Release note*: https://s.apache.org/x3sgj

BR,
Zhankun


[jira] [Created] (HDDS-2344) CLONE - Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-21 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2344:


 Summary: CLONE - Add immutable entries in to the DoubleBuffer for 
Volume requests.
 Key: HDDS-2344
 URL: https://issues.apache.org/jira/browse/HDDS-2344
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham
 Fix For: 0.5.0


OMBucketCreateRequest.java L181:

omClientResponse =
 new OMBucketCreateResponse(omBucketInfo,
 omResponse.build());

 

We add this to double-buffer, and double-buffer flushThread which is running in 
the background when picks up, converts to protoBuf and to ByteArray and write 
to rocksDB tables. So, during this conversion(This conversion will be done 
without any lock acquire), if any other request changes internal structure(like 
acls list) of OMBucketInfo we might get ConcurrentModificationException.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-21 Thread GitBox
bharatviswa504 opened a new pull request #71: HDDS-2344. Add immutable entries 
in to the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71
 
 
   …quests.
   
   ## What changes were proposed in this pull request?
   
   OMVolumeCreateRequest.java L159:
   
   omClientResponse =
new OMVolumeCreateResponse(omVolumeArgs,volumeList, omResponse.build());

   
   We add this to double-buffer, and double-buffer flushThread which is running 
in the background when picks up, converts to protoBuf and to ByteArray and 
write to rocksDB tables. So, during this conversion(This conversion will be 
done without any lock acquire), if any other request changes internal 
structure(like acls list) of OmVolumeArgs we might get 
ConcurrentModificationException.
   
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2344
   
   Please replace this section with the link to the Apache JIRA)
   
   ## How was this patch tested?
   
   Ran TestOzoneRpcClient which tests this code path and also added a new test 
for clone method.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2345) Add a UT for newly added clone() in OmBucketInfo

2019-10-21 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2345:


 Summary: Add a UT for newly added clone() in OmBucketInfo
 Key: HDDS-2345
 URL: https://issues.apache.org/jira/browse/HDDS-2345
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


Add a UT for newly added clone() method in OMBucketInfo as part of HDDS-2333.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #50: HDDS-2131. Optimize replication type and creation time calculation in S3 MPU list call.

2019-10-21 Thread GitBox
bharatviswa504 commented on a change in pull request #50: HDDS-2131. Optimize 
replication type and creation time calculation in S3 MPU list call.
URL: https://github.com/apache/hadoop-ozone/pull/50#discussion_r337338976
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -893,17 +893,18 @@ private OmMultipartInfo createMultipartInfo(OmKeyArgs 
keyArgs,
   // Not checking if there is an already key for this in the keyTable, as
   // during final complete multipart upload we take care of this.
 
-
+  long currentTime = Time.now();
   Map partKeyInfoMap = new HashMap<>();
   OmMultipartKeyInfo multipartKeyInfo = new OmMultipartKeyInfo(
-  multipartUploadID, partKeyInfoMap);
+  multipartUploadID, currentTime, keyArgs.getType(),
 
 Review comment:
   This change needs to be done 
here.https://github.com/bharatviswa504/hadoop-ozone/blob/master/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java#L146
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #50: HDDS-2131. Optimize replication type and creation time calculation in S3 MPU list call.

2019-10-21 Thread GitBox
bharatviswa504 commented on a change in pull request #50: HDDS-2131. Optimize 
replication type and creation time calculation in S3 MPU list call.
URL: https://github.com/apache/hadoop-ozone/pull/50#discussion_r337339047
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -893,17 +893,18 @@ private OmMultipartInfo createMultipartInfo(OmKeyArgs 
keyArgs,
   // Not checking if there is an already key for this in the keyTable, as
   // during final complete multipart upload we take care of this.
 
-
+  long currentTime = Time.now();
   Map partKeyInfoMap = new HashMap<>();
   OmMultipartKeyInfo multipartKeyInfo = new OmMultipartKeyInfo(
-  multipartUploadID, partKeyInfoMap);
+  multipartUploadID, currentTime, keyArgs.getType(),
 
 Review comment:
   For write requests old code is not used anymore.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on issue #53: HDDS-2330. Random key generator can get stuck

2019-10-21 Thread GitBox
adoroszlai commented on issue #53: HDDS-2330. Random key generator can get stuck
URL: https://github.com/apache/hadoop-ozone/pull/53#issuecomment-544820234
 
 
   Thanks @arp7 for reviewing and merging this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.10.0 (RC0)

2019-10-21 Thread Konstantin Shvachko
+1 on RC0.
- Verified signatures
- Built from sources
- Ran unit tests for new features
- Checked artifacts on Nexus, made sure the sources are present.

Thanks
--Konstantin


On Wed, Oct 16, 2019 at 6:01 PM Jonathan Hung  wrote:

> Hi folks,
>
> This is the first release candidate for the first release of Apache Hadoop
> 2.10 line. It contains 361 fixes/improvements since 2.9 [1]. It includes
> features such as:
>
> - User-defined resource types
> - Native GPU support as a schedulable resource type
> - Consistent reads from standby node
> - Namenode port based selective encryption
> - Improvements related to rolling upgrade support from 2.x to 3.x
>
> The RC0 artifacts are at: http://home.apache.org/~jhung/hadoop-2.10.0-RC0/
>
> RC tag is release-2.10.0-RC0.
>
> The maven artifacts are hosted here:
> https://repository.apache.org/content/repositories/orgapachehadoop-1241/
>
> My public key is available here:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The vote will run for 5 weekdays, until Wednesday, October 23 at 6:00 pm
> PDT.
>
> Thanks,
> Jonathan Hung
>
> [1]
>
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20YARN%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.10.0%20AND%20fixVersion%20not%20in%20(2.9.2%2C%202.9.1%2C%202.9.0)
>


[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #59: HDDS-2327. Provide new Freon test to test Ratis pipeline with pure XceiverClientRatis

2019-10-21 Thread GitBox
adoroszlai commented on a change in pull request #59: HDDS-2327. Provide new 
Freon test to test Ratis pipeline with pure XceiverClientRatis
URL: https://github.com/apache/hadoop-ozone/pull/59#discussion_r337337965
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/DatanodeChunkGenerator.java
 ##
 @@ -0,0 +1,163 @@
+package org.apache.hadoop.ozone.freon;
+
+import java.nio.charset.StandardCharsets;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumData;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.DatanodeBlockID;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.WriteChunkRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol;
+import org.apache.hadoop.ozone.OzoneSecurityUtil;
+import org.apache.hadoop.ozone.common.Checksum;
+
+import com.codahale.metrics.Timer;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Data generator to use pure datanode XCeiver interface.
+ */
+@Command(name = "dcg",
+aliases = "datanode-chunk-generator",
+description = "Create as many chunks as possible with pure XCeiverClient.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class DatanodeChunkGenerator extends BaseFreonGenerator implements
+Callable {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DatanodeChunkGenerator.class);
+
+  @Option(names = {"-s", "--size"},
+  description = "Size of the generated chunks (in bytes)",
+  defaultValue = "1024")
+  private int chunkSize;
+
+  @Option(names = {"-l", "--pipeline"},
+  description = "Pipeline to use. By default the first RATIS/THREE "
+  + "pipeline will be used.",
+  defaultValue = "")
+  private String pipelineId;
+
+  private XceiverClientSpi xceiverClientSpi;
+
+  private Timer timer;
+
+  private ByteString dataToWrite;
+  private ChecksumData checksumProtobuf;
+
+  @Override
+  public Void call() throws Exception {
+
+init();
+
+OzoneConfiguration ozoneConf = createOzoneConfiguration();
+if (OzoneSecurityUtil.isSecurityEnabled(ozoneConf)) {
+  throw new IllegalArgumentException(
+  "Datanode chunk generator is not supported in secure environment");
+}
+
+StorageContainerLocationProtocol scmLocationClient =
+createStorageContainerLocationClient(ozoneConf);
+List pipelines = scmLocationClient.listPipelines();
+
+Pipeline pipeline;
+if (pipelineId != null && pipelineId.length() > 0) {
+  pipeline = pipelines.stream()
+  .filter(p -> p.getId().toString().equals(pipelineId))
+  .findFirst()
+  .orElseThrow(() -> new IllegalArgumentException(
+  "Pipeline ID is defined, but there is no such pipeline: "
+  + pipelineId));
+
+} else {
+  pipeline = pipelines.stream()
+  .filter(p -> p.getFactor() == ReplicationFactor.THREE)
+  .findFirst()
+  .orElseThrow(() -> new IllegalArgumentException(
+  "Pipeline ID is defined, and no pipeline has been found with "
+  + "factor=THREE"));
+  LOG.info("Using pipeline {}", pipeline.getId().toString());
+
+}
+
+xceiverClientSpi =
+new XceiverClientManager(ozoneConf).acquireClient(pipeline);
+
+timer = getMetrics().timer("file-create");
 
 Review comment:
   I think `chunk-create` might be better here.  (`file-create` is used in 
`HadoopFsGenerator`)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

--

[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #59: HDDS-2327. Provide new Freon test to test Ratis pipeline with pure XceiverClientRatis

2019-10-21 Thread GitBox
adoroszlai commented on a change in pull request #59: HDDS-2327. Provide new 
Freon test to test Ratis pipeline with pure XceiverClientRatis
URL: https://github.com/apache/hadoop-ozone/pull/59#discussion_r337345643
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/DatanodeChunkGenerator.java
 ##
 @@ -0,0 +1,163 @@
+package org.apache.hadoop.ozone.freon;
+
+import java.nio.charset.StandardCharsets;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumData;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.DatanodeBlockID;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.WriteChunkRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol;
+import org.apache.hadoop.ozone.OzoneSecurityUtil;
+import org.apache.hadoop.ozone.common.Checksum;
+
+import com.codahale.metrics.Timer;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Data generator to use pure datanode XCeiver interface.
+ */
+@Command(name = "dcg",
+aliases = "datanode-chunk-generator",
+description = "Create as many chunks as possible with pure XCeiverClient.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class DatanodeChunkGenerator extends BaseFreonGenerator implements
+Callable {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DatanodeChunkGenerator.class);
+
+  @Option(names = {"-s", "--size"},
+  description = "Size of the generated chunks (in bytes)",
+  defaultValue = "1024")
+  private int chunkSize;
+
+  @Option(names = {"-l", "--pipeline"},
+  description = "Pipeline to use. By default the first RATIS/THREE "
+  + "pipeline will be used.",
+  defaultValue = "")
+  private String pipelineId;
+
+  private XceiverClientSpi xceiverClientSpi;
+
+  private Timer timer;
+
+  private ByteString dataToWrite;
+  private ChecksumData checksumProtobuf;
+
+  @Override
+  public Void call() throws Exception {
+
+init();
+
+OzoneConfiguration ozoneConf = createOzoneConfiguration();
+if (OzoneSecurityUtil.isSecurityEnabled(ozoneConf)) {
+  throw new IllegalArgumentException(
+  "Datanode chunk generator is not supported in secure environment");
+}
+
+StorageContainerLocationProtocol scmLocationClient =
+createStorageContainerLocationClient(ozoneConf);
+List pipelines = scmLocationClient.listPipelines();
+
+Pipeline pipeline;
+if (pipelineId != null && pipelineId.length() > 0) {
+  pipeline = pipelines.stream()
+  .filter(p -> p.getId().toString().equals(pipelineId))
+  .findFirst()
+  .orElseThrow(() -> new IllegalArgumentException(
+  "Pipeline ID is defined, but there is no such pipeline: "
+  + pipelineId));
+
+} else {
+  pipeline = pipelines.stream()
+  .filter(p -> p.getFactor() == ReplicationFactor.THREE)
+  .findFirst()
+  .orElseThrow(() -> new IllegalArgumentException(
+  "Pipeline ID is defined, and no pipeline has been found with "
+  + "factor=THREE"));
+  LOG.info("Using pipeline {}", pipeline.getId().toString());
+
+}
+
+xceiverClientSpi =
+new XceiverClientManager(ozoneConf).acquireClient(pipeline);
+
+timer = getMetrics().timer("file-create");
+
+byte[] data = RandomStringUtils.randomAscii(chunkSize)
+.getBytes(StandardCharsets.UTF_8);
+
+dataToWrite = ByteString.copyFrom(data);
+
+Checksum checksum = new Checksum(ChecksumType.CRC32, chunkSize);
+checksumProtobuf = checksum.computeChecksum(data).getProtoBufMessage();
+
+runTests(this::writeChunk);
+
+return null;
+  }
+
+  private void writeChunk(long stepNo)
+  throws Exception {
+
+//Always use this fake blockid.
+DatanodeBlockID blockId = DatanodeBlock

[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #59: HDDS-2327. Provide new Freon test to test Ratis pipeline with pure XceiverClientRatis

2019-10-21 Thread GitBox
adoroszlai commented on a change in pull request #59: HDDS-2327. Provide new 
Freon test to test Ratis pipeline with pure XceiverClientRatis
URL: https://github.com/apache/hadoop-ozone/pull/59#discussion_r337338057
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/DatanodeChunkGenerator.java
 ##
 @@ -0,0 +1,163 @@
+package org.apache.hadoop.ozone.freon;
+
+import java.nio.charset.StandardCharsets;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumData;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.DatanodeBlockID;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.WriteChunkRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol;
+import org.apache.hadoop.ozone.OzoneSecurityUtil;
+import org.apache.hadoop.ozone.common.Checksum;
+
+import com.codahale.metrics.Timer;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Data generator to use pure datanode XCeiver interface.
+ */
+@Command(name = "dcg",
+aliases = "datanode-chunk-generator",
+description = "Create as many chunks as possible with pure XCeiverClient.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class DatanodeChunkGenerator extends BaseFreonGenerator implements
+Callable {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DatanodeChunkGenerator.class);
+
+  @Option(names = {"-s", "--size"},
+  description = "Size of the generated chunks (in bytes)",
+  defaultValue = "1024")
+  private int chunkSize;
+
+  @Option(names = {"-l", "--pipeline"},
 
 Review comment:
   `-p` is already taken for `--prefix`:
   
   
https://github.com/apache/hadoop-ozone/blob/a1fe75c40260ff85e4480a4ce57b9040b95fee3b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java#L99
   
   Maybe `-P` instead of `-l`?
   
   ```suggestion
 @Option(names = {"-P", "--pipeline"},
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-21 Thread GitBox
ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS 
pipeline creation and destroy through heartbeat commands
URL: https://github.com/apache/hadoop-ozone/pull/29#discussion_r336862716
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -379,6 +418,44 @@ public void deactivatePipeline(PipelineID pipelineID)
 stateManager.deactivatePipeline(pipelineID);
   }
 
+  /**
+   * Wait a pipeline to be OPEN.
+   *
+   * @param pipelineID ID of the pipeline to wait for.
+   * @param timeoutwait timeout, millisecond
+   * @throws IOException in case of any Exception, such as timeout
+   */
+  @Override
+  public void waitPipelineReady(PipelineID pipelineID, long timeout)
+  throws IOException {
+Pipeline pipeline;
+try {
+  pipeline = stateManager.getPipeline(pipelineID);
+} catch (PipelineNotFoundException e) {
+  throw new IOException(String.format("Pipeline %s cannot be found",
+  pipelineID));
+}
 
 Review comment:
   ok


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-21 Thread GitBox
ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS 
pipeline creation and destroy through heartbeat commands
URL: https://github.com/apache/hadoop-ozone/pull/29#discussion_r336863445
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ClosePipelineCommandHandler.java
 ##
 @@ -0,0 +1,121 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.common.statemachine.commandhandler;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.ClosePipelineCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.SCMCommandProto;
+import org.apache.hadoop.ozone.container.common.statemachine
+.SCMConnectionManager;
+import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
+import org.apache.hadoop.ozone.container.common.transport.server
+.XceiverServerSpi;
+import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
+import org.apache.hadoop.ozone.protocol.commands.ClosePipelineCommand;
+import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.apache.hadoop.util.Time;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * Handler for close pipeline command received from SCM.
+ */
+public class ClosePipelineCommandHandler implements CommandHandler {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ClosePipelineCommandHandler.class);
+
+  private AtomicLong invocationCount = new AtomicLong(0);
+  private long totalTime;
+
+  /**
+   * Constructs a closePipelineCommand handler.
+   */
+  public ClosePipelineCommandHandler() {
+  }
+
+  /**
+   * Handles a given SCM command.
+   *
+   * @param command   - SCM Command
+   * @param ozoneContainer- Ozone Container.
+   * @param context   - Current Context.
+   * @param connectionManager - The SCMs that we are talking to.
+   */
+  @Override
+  public void handle(SCMCommand command, OzoneContainer ozoneContainer,
+  StateContext context, SCMConnectionManager connectionManager) {
+invocationCount.incrementAndGet();
+final long startTime = Time.monotonicNow();
+final DatanodeDetails dn = context.getParent().getDatanodeDetails();
+final ClosePipelineCommandProto closeCommand =
+((ClosePipelineCommand)command).getProto();
+final HddsProtos.PipelineID pipelineID = closeCommand.getPipelineID();
+
+try {
+  XceiverServerSpi server = ozoneContainer.getWriteChannel();
+  server.removeGroup(pipelineID);
+  context.getParent().triggerHeartbeat();
 
 Review comment:
   Trigger heartbeat can be removed here. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-21 Thread GitBox
ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS 
pipeline creation and destroy through heartbeat commands
URL: https://github.com/apache/hadoop-ozone/pull/29#discussion_r336863557
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/OneReplicaPipelineSafeModeRule.java
 ##
 @@ -47,22 +41,24 @@
  * replica available for read when we exit safe mode.
  */
 public class OneReplicaPipelineSafeModeRule extends
-SafeModeExitRule {
+SafeModeExitRule {
 
   private static final Logger LOG =
   LoggerFactory.getLogger(OneReplicaPipelineSafeModeRule.class);
 
   private int thresholdCount;
   private Set reportedPipelineIDSet = new HashSet<>();
-  private final PipelineManager pipelineManager;
-  private int currentReportedPipelineCount = 0;
+  private Set oldPipelineIDSet;
+  private int oldPipelineReportedCount = 0;
+  private int oldPipelineThresholdCount = 0;
+  private int newPipelineThresholdCount = 0;
 
 Review comment:
   Right. Will change it. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-21 Thread GitBox
ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS 
pipeline creation and destroy through heartbeat commands
URL: https://github.com/apache/hadoop-ozone/pull/29#discussion_r336865673
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -318,15 +318,6 @@
   datanode periodically send container report to SCM. Unit could be
   defined with postfix (ns,ms,s,m,h,d)
   
-  
-hdds.command.status.report.interval
-6ms
-OZONE, CONTAINER, MANAGEMENT
-Time interval of the datanode to send status of command
-  execution. Each datanode periodically the execution status of commands
-  received from SCM to SCM. Unit could be defined with postfix
-  (ns,ms,s,m,h,d)
-  
 
 Review comment:
   It's intentional.  This property shows up two times in ozone-default.xml 
with different default value.  I keep the one with the same default value as 
defined in java file. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-21 Thread GitBox
ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS 
pipeline creation and destroy through heartbeat commands
URL: https://github.com/apache/hadoop-ozone/pull/29#discussion_r336866036
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -379,6 +418,44 @@ public void deactivatePipeline(PipelineID pipelineID)
 stateManager.deactivatePipeline(pipelineID);
   }
 
+  /**
+   * Wait a pipeline to be OPEN.
+   *
+   * @param pipelineID ID of the pipeline to wait for.
+   * @param timeoutwait timeout, millisecond
+   * @throws IOException in case of any Exception, such as timeout
+   */
+  @Override
+  public void waitPipelineReady(PipelineID pipelineID, long timeout)
+  throws IOException {
+Pipeline pipeline;
+try {
+  pipeline = stateManager.getPipeline(pipelineID);
+} catch (PipelineNotFoundException e) {
+  throw new IOException(String.format("Pipeline %s cannot be found",
+  pipelineID));
+}
+
+boolean ready;
+long st = Time.monotonicNow();
+if (timeout == 0) {
+  timeout = pipelineWaitDefaultTimeout;
+}
+for(ready = pipeline.isOpen();
+!ready && Time.monotonicNow() - st < timeout;
+ready = pipeline.isOpen()) {
+  try {
+Thread.sleep((long)1000);
 
 Review comment:
   ok


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-21 Thread GitBox
ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS 
pipeline creation and destroy through heartbeat commands
URL: https://github.com/apache/hadoop-ozone/pull/29#discussion_r336866775
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/SCMSafeModeManager.java
 ##
 @@ -190,6 +196,18 @@ public synchronized void validateSafeModeExitRules(String 
ruleName,
*/
   @VisibleForTesting
   public void exitSafeMode(EventPublisher eventQueue) {
+// Wait a while for as many as new pipelines to be ready
+if (createPipelineInSafeMode) {
+  long sleepTime = config.getTimeDuration(
+  HddsConfigKeys.HDDS_SCM_WAIT_TIME_AFTER_SAFE_MODE_EXIT,
+  HddsConfigKeys.HDDS_SCM_WAIT_TIME_AFTER_SAFE_MODE_EXIT_DEFAULT,
+  TimeUnit.MILLISECONDS);
+  try {
+Thread.sleep(sleepTime);
 
 Review comment:
   This is for unit test. Some existing unit tests will fail because they want 
to write data once the miniOzone cluster exits the safemode and ready. But not 
all pipelines are ready to serve the write request after safemode.  
   Maybe I can move this logic to miniOzoneCluster. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-21 Thread GitBox
ChenSammi commented on a change in pull request #29: HDDS-2034. Async RATIS 
pipeline creation and destroy through heartbeat commands
URL: https://github.com/apache/hadoop-ozone/pull/29#discussion_r336866911
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -127,9 +141,16 @@ public void setPipelineProvider(ReplicationType 
replicationType,
 pipelineFactory.setProvider(replicationType, provider);
   }
 
+  public Set getOldPipelineIdSet() {
+return oldRatisThreeFactorPipelineIDSet;
+  }
+
   private void initializePipelineState() throws IOException {
 if (pipelineStore.isEmpty()) {
   LOG.info("No pipeline exists in current db");
+  if (pipelineAvailabilityCheck && createPipelineInSafemode) {
+startPipelineCreator();
 
 Review comment:
   Will check it. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2338) Avoid buffer copy while submitting write chunk request in Ozone Client

2019-10-21 Thread Shashikant Banerjee (Jira)
Shashikant Banerjee created HDDS-2338:
-

 Summary: Avoid buffer copy while submitting write chunk request in 
Ozone Client
 Key: HDDS-2338
 URL: https://issues.apache.org/jira/browse/HDDS-2338
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Shashikant Banerjee
 Fix For: 0.5.0


Based, on the config value of "ozone.UnsafeByteOperations.enabled" which by 
default is set to true we used to avoid buffer copy while submitting write 
chunk request to Ratis. With recent changes around byteStringConversion 
utility, seems like the config is never passed to BlockOutputStream and it 
results in buffer copying every time there is a byteBuffer to byteString 
conversion is done in ozone client. This Jira is to pass the appropriate config 
value so that buffer copy can be avoided.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] swagle commented on issue #23: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-21 Thread GitBox
swagle commented on issue #23: HDDS-1868. Ozone pipelines should be marked as 
ready only after the leader election is complete.
URL: https://github.com/apache/hadoop-ozone/pull/23#issuecomment-544390434
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek commented on issue #11: HDDS-2291. Acceptance tests for OM HA.

2019-10-21 Thread GitBox
elek commented on issue #11: HDDS-2291. Acceptance tests for OM HA.
URL: https://github.com/apache/hadoop-ozone/pull/11#issuecomment-544410792
 
 
   Thank you very much the patch @hanishakoneru . Overall I am very to happy 
have more HA tests  with the robot framework and I would be happy to commit it 
(after clean builds).
   
   _Personally_ I would prefer to use a different approach, but it's only 
because I may have different thinking. It may not be better or worse. The only 
thing what I would like to do here is the explain my view, just because this is 
the fan part: to understand the thinking of each other.
   
   __1. The level of the tests__
   
   To run acceptance test we need to solve two problems:
   
1.) Create a running ozone cluster (and may restart services during the 
tests)
2.) Execute commands / check the results (run tests + assert)
   
   Currently these two roles/levels are separated. 
   
   The second one is implemented by the [robot 
tests](https://github.com/apache/hadoop-ozone/tree/master/hadoop-ozone/dist/src/main/smoketest)
 but the (existing) robot tests don't include any logic to start (or restart) 
services.
   
   The environments are mainly defined with docker-compose files and the logic 
to start them is defined by __shell scripts__ (for example 
[this](https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/dist/src/main/compose/ozone/test.sh)
 is the simplest one)
   
   The two levels/roles are separated.
   
   __ 2. the flexibility __
   
   The main advantage of this approach that you can run the tests in different 
environments. For example I can replace the __shell__ script based cluster 
creation process with anything else.

1. I can create kubernetes clusters and execute the same robot tests inside.
2. Anybody can execute the same robot tests in any commercial Hadoop/Ozone 
distribution
   
   __ 3. blockade __
   
   Blockade based tests are slightly different. They do both 1 (cluster 
creation) and 2 (test + assertion). Mainly because they are more interested 
about the environment setup (creating cluster, shutting down nodes, etc.).
   
   They do all the cluster set up / tear down based on docker-compose and the 
logic is defined in python scripts.
   
   __ 4. docker + ssh __
   
   This patch follows a different approach. Instead of using docker-compose to 
start/stop/restart services/nodes it installs an additional ssh daemon inside 
the containers to make it possible to restart the jvm process instead of the 
containers. (docker-compose is used to start/stop services and ssh daemons are 
used to restart)
   
   Usually this is not the way which is suggested to use in containerized 
environments. With docker usually it's easier to restart the containers and run 
only one process per container (and it provides better separation and easier 
management).
   
   __ 5. this patch __
   
   But the previous approach (using docker-compose to start / stop instead of 
ssh) is not portable at all. It can't be started inside kubernetes with little 
effort(for example). 
   
   On the other hand this *patch can be used very easily* in other environments 
as the "service restart" part of the environment management is included (with 
the help of ssh).
   
   **Summary**:
   
* This is a slightly different approach what we followed in the normal 
tests and not the mainstream usage of the containers
* But it's very effective and has some clear advantages (easier to re-use 
tests in different env)
* I have ideas how can it be done in a different way but they have 
different drawbacks (and different advantages)
   
   With other worlds: if we separate the _environment creation_ from the _test 
definitions_ where should we put the restart functionality to. You put it to 
the place where we have the _test definition_, I described a system where it 
can be put to the place where we have the _environment creation_.
   
   I think both approach is acceptable, __I will commit this one after a green 
acceptance test run__. 
   
   (And we can continue the thinking about how these tests can be evolved. For 
example: Do we need to separate these kind of the tests and create more tests 
where we restart clusters?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek commented on issue #11: HDDS-2291. Acceptance tests for OM HA.

2019-10-21 Thread GitBox
elek commented on issue #11: HDDS-2291. Acceptance tests for OM HA.
URL: https://github.com/apache/hadoop-ozone/pull/11#issuecomment-544412609
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14917) Change the ICON of "Decommissioned & dead" datanode on "dfshealth.html"

2019-10-21 Thread Xieming Li (Jira)
Xieming Li created HDFS-14917:
-

 Summary: Change the ICON of "Decommissioned & dead" datanode on 
"dfshealth.html"
 Key: HDFS-14917
 URL: https://issues.apache.org/jira/browse/HDFS-14917
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ui
Reporter: Xieming Li
 Attachments: image-2019-10-21-17-49-10-635.png, 
image-2019-10-21-17-49-58-759.png, image-2019-10-21-18-03-53-914.png, 
image-2019-10-21-18-04-52-405.png, image-2019-10-21-18-05-19-160.png, 
image-2019-10-21-18-13-01-884.png, image-2019-10-21-18-13-54-427.png

This is a UI change proposal:
 The icon of "Decommissioned & dead" datanode could be improved.

It can be changed from    
!image-2019-10-21-18-05-19-160.png|width=31,height=28! to   
!image-2019-10-21-18-04-52-405.png|width=32,height=29! so that,
 #  icon " !image-2019-10-21-18-13-01-884.png|width=26,height=25! " can be used 
for all status starts with "decommission" on dfshealth.html, 
 #  icon " !image-2019-10-21-18-13-01-884.png|width=26,height=25! " can be 
differentiated with icon " !image-2019-10-21-18-13-54-427.png! " on 
federationhealth.html

|*DataNode Infomation Legend*
 dfshealth.html#tab-datanode 
(now)|!image-2019-10-21-17-49-10-635.png|width=516,height=55!|
|*DataNode* *Infomation* *Legend*
  dfshealth.html#tab-datanode 
(proposed)|!image-2019-10-21-18-03-53-914.png|width=589,height=60!|
|*NameService Legend*
 
federationhealth.htm#tab-namenode|!image-2019-10-21-17-49-58-759.png|width=445,height=43!|



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14918) Remove useless getRedundancyThread from BlockManagerTestUtil

2019-10-21 Thread Fei Hui (Jira)
Fei Hui created HDFS-14918:
--

 Summary: Remove useless getRedundancyThread from 
BlockManagerTestUtil
 Key: HDFS-14918
 URL: https://issues.apache.org/jira/browse/HDFS-14918
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Fei Hui


Remove the dead code, it's useless.
{code}
 /**
  * @return redundancy monitor thread instance from block manager.
  */
 public static Daemon getRedundancyThread(final BlockManager blockManager) {
   return blockManager.getRedundancyThread();
 }
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14919) Provide Non DFS Used per disk in DataNode UI

2019-10-21 Thread Lisheng Sun (Jira)
Lisheng Sun created HDFS-14919:
--

 Summary: Provide Non DFS Used per disk in DataNode UI
 Key: HDFS-14919
 URL: https://issues.apache.org/jira/browse/HDFS-14919
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Lisheng Sun






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/

[Oct 20, 2019 6:28:30 AM] (ayushsaxena) HADOOP-16662. Remove unnecessary 
InnerNode check in




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.namenode.TestNameNodeHttpServerXFrame 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [160K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [236K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/481/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]

[jira] [Created] (HDDS-2339) Add OzoneManager to MiniOzoneChaosCluster

2019-10-21 Thread Mukul Kumar Singh (Jira)
Mukul Kumar Singh created HDDS-2339:
---

 Summary: Add OzoneManager to MiniOzoneChaosCluster
 Key: HDDS-2339
 URL: https://issues.apache.org/jira/browse/HDDS-2339
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: om
Reporter: Mukul Kumar Singh


This jira proposes to add OzoneManager to MiniOzoneChaosCluster with OzoneHA 
implementation done. This will help in discovering bugs in Ozone Manager HA



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] anuengineer commented on issue #11: HDDS-2291. Acceptance tests for OM HA.

2019-10-21 Thread GitBox
anuengineer commented on issue #11: HDDS-2291. Acceptance tests for OM HA.
URL: https://github.com/apache/hadoop-ozone/pull/11#issuecomment-544595437
 
 
   @elek  This is a brilliant comment. Can you please add this comment to the 
readme in the compose directory *and* to the wiki pages so that others are 
aware of this. It is this kind of context which is often missing when you read 
code. Thank you taking time out to explain the thought process.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] anuengineer commented on issue #12: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka

2019-10-21 Thread GitBox
anuengineer commented on issue #12: HDDS-2206. Separate handling for 
OMException and IOException in the Ozone Manager. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop-ozone/pull/12#issuecomment-544597123
 
 
   I thought I asked for the config parameter. I really think we should have 
the ability to swtich this off if needed. I would like to revert this and 
commit with the configuration parameter, that disables this if neeed. In fact, 
my preference is to enable this if and only if it is needed.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] anuengineer removed a comment on issue #12: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka

2019-10-21 Thread GitBox
anuengineer removed a comment on issue #12: HDDS-2206. Separate handling for 
OMException and IOException in the Ozone Manager. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop-ozone/pull/12#issuecomment-544597123
 
 
   I thought I asked for the config parameter. I really think we should have 
the ability to swtich this off if needed. I would like to revert this and 
commit with the configuration parameter, that disables this if neeed. In fact, 
my preference is to enable this if and only if it is needed.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
anuengineer commented on a change in pull request #58: HDDS-2293. Create a new 
CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337126332
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
 
 Review comment:
   Should we now move to JDK 11 ?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
anuengineer commented on a change in pull request #58: HDDS-2293. Create a new 
CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337126332
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
 
 Review comment:
   Should we now move to JDK 11 ?, Not related to this JIRA. But something that 
came to my mind/


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
anuengineer commented on a change in pull request #58: HDDS-2293. Create a new 
CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337126835
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
 
 Review comment:
   This, let us move away. Almost impossible to find the downloads these days, 
and even if you do, i think you have compile yourself.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337127698
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
 
 Review comment:
   @anuengineer I think we can should that thread and it will require some work.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] anuengineer commented on issue #54: HDDS-2281. ContainerStateMachine#handleWriteChunk should ignore close container exception

2019-10-21 Thread GitBox
anuengineer commented on issue #54: HDDS-2281. 
ContainerStateMachine#handleWriteChunk should ignore close container exception
URL: https://github.com/apache/hadoop-ozone/pull/54#issuecomment-544608593
 
 
   @bshashikant can we please fill up the JIRA template ? that helps people who 
read this JIRA and understand what it is about. I was reading the JIRA 
description and the patch and not able to make a head or tail about it.
   
   @mukul1987 when you commit or review can you please comment about this ? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #54: HDDS-2281. ContainerStateMachine#handleWriteChunk should ignore close container exception

2019-10-21 Thread GitBox
anuengineer commented on a change in pull request #54: HDDS-2281. 
ContainerStateMachine#handleWriteChunk should ignore close container exception
URL: https://github.com/apache/hadoop-ozone/pull/54#discussion_r337130028
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -460,6 +460,10 @@ private ExecutorService getCommandExecutor(
 LOG.error(gid + ": writeChunk writeStateMachineData failed: 
blockId"
 + write.getBlockID() + " logIndex " + entryIndex + " chunkName 
"
 + write.getChunkData().getChunkName() + e);
+metrics.incNumWriteDataFails();
+// write chunks go in parallel. It's possible that one write chunk
+// see the stateMachine is marked unhealthy by other parallel 
thread.
+stateMachineHealthy.set(false);
 
 Review comment:
   So question; if a thread has marked the container as unhealthy why should a 
write be successful at all ? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #54: HDDS-2281. ContainerStateMachine#handleWriteChunk should ignore close container exception

2019-10-21 Thread GitBox
anuengineer commented on a change in pull request #54: HDDS-2281. 
ContainerStateMachine#handleWriteChunk should ignore close container exception
URL: https://github.com/apache/hadoop-ozone/pull/54#discussion_r337130438
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -460,6 +460,10 @@ private ExecutorService getCommandExecutor(
 LOG.error(gid + ": writeChunk writeStateMachineData failed: 
blockId"
 + write.getBlockID() + " logIndex " + entryIndex + " chunkName 
"
 + write.getChunkData().getChunkName() + e);
+metrics.incNumWriteDataFails();
+// write chunks go in parallel. It's possible that one write chunk
+// see the stateMachine is marked unhealthy by other parallel 
thread.
+stateMachineHealthy.set(false);
 
 Review comment:
   I know this patch is merged, but I have no way of understanding what this 
means -- so I appreciate some comments or feedback that explains what happens 
here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2340) Update RATIS snapshot version

2019-10-21 Thread Siddharth Wagle (Jira)
Siddharth Wagle created HDDS-2340:
-

 Summary: Update RATIS snapshot version
 Key: HDDS-2340
 URL: https://issues.apache.org/jira/browse/HDDS-2340
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: build
Affects Versions: 0.5.0
Reporter: Siddharth Wagle
Assignee: Siddharth Wagle
 Fix For: 0.5.0


Update RATIS version to incorporate fix that went into RATIS-707 among others.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel opened a new pull request #66: Circleci test

2019-10-21 Thread GitBox
vivekratnavel opened a new pull request #66: Circleci test
URL: https://github.com/apache/hadoop-ozone/pull/66
 
 
   ## What changes were proposed in this pull request?
   
   (Please fill in changes proposed in this fix)
   
   ## What is the link to the Apache JIRA
   
   (Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HDDS-. Fix a typo in YYY.)
   
   Please replace this section with the link to the Apache JIRA)
   
   ## How was this patch tested?
   
   (Please explain how this patch was tested. Ex: unit tests, manual tests)
   (If this patch involves UI changes, please attach a screen-shot; otherwise, 
remove this)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel closed pull request #66: Circleci test

2019-10-21 Thread GitBox
vivekratnavel closed pull request #66: Circleci test
URL: https://github.com/apache/hadoop-ozone/pull/66
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] swagle opened a new pull request #67: HDDS-2340. Updated ratis.version to get latest snapshot.

2019-10-21 Thread GitBox
swagle opened a new pull request #67: HDDS-2340. Updated ratis.version to get 
latest snapshot.
URL: https://github.com/apache/hadoop-ozone/pull/67
 
 
   ## What changes were proposed in this pull request?
   Updated ratis.version in pom.xml
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2340
   
   ## How was this patch tested?
   Tested that the build succeeds, also verified a failing unit tets works 
locally, with the patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



How should we do about dependency update?

2019-10-21 Thread Wei-Chiu Chuang
Hi Hadoop developers,

I've always had this question and I don't know the answer.

For the last few months I finally spent time to deal with the vulnerability
reports from our internal dependency check tools.

Say in HADOOP-16152 
we update Jetty from 9.3.27 to 9.4.20 because of CVE-2019-16869, should I
cherrypick the fix into all lower releases?
This is not a trivial change, and it breaks downstreams like Tez. On the
other hand, it doesn't seem reasonable if I put this fix only in trunk, and
left older releases vulnerable. What's the expectation of downstream
applications w.r.t breaking compatibility vs fixing security issues?

Thoughts?


[GitHub] [hadoop-ozone] avijayanhwx opened a new pull request #68: HDDS-2320. Negative value seen for OM NumKeys Metric in JMX.

2019-10-21 Thread GitBox
avijayanhwx opened a new pull request #68: HDDS-2320. Negative value seen for 
OM NumKeys Metric in JMX.
URL: https://github.com/apache/hadoop-ozone/pull/68
 
 
   ## What changes were proposed in this pull request?
   
   After running teragen, I noticed that the NumKeys metric in OzoneManager JMX 
had a negative value. On investigation, it was seen that we decrease the 
numkeys metric while deleting any key (including directory), but do not 
increment NumKeys when we create a directory. To keep it consistent, the change 
proposes increasing NumKeys while creating directories as well. This 
essentially means that the "NumKeys" is a pure OM metric that tracks the number 
of entries it manages as opposed to the number of files that are managed by SCM 
and Ratis. 
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2320
   
   ## How was this patch tested?
   Manually tested on cluster. Added unit test. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #68: HDDS-2320. Negative value seen for OM NumKeys Metric in JMX.

2019-10-21 Thread GitBox
adoroszlai commented on a change in pull request #68: HDDS-2320. Negative value 
seen for OM NumKeys Metric in JMX.
URL: https://github.com/apache/hadoop-ozone/pull/68#discussion_r337167085
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMDirectoryCreateRequest.java
 ##
 @@ -349,6 +349,47 @@ public void testValidateAndUpdateCacheWithFilesInPath() 
throws Exception {
 
   }
 
+  @Test
+  public void testCreateDirectoryOMMetric()
+  throws Exception {
+String volumeName = "vol1";
+String bucketName = "bucket1";
+String keyName = RandomStringUtils.randomAlphabetic(5);
+for (int i =0; i< 3; i++) {
+  keyName += "/" + RandomStringUtils.randomAlphabetic(5);
+}
+
+// Add volume and bucket entries to DB.
+TestOMRequestUtils.addVolumeAndBucketToDB(volumeName, bucketName,
+omMetadataManager);
+
+OMRequest omRequest = createDirectoryRequest(volumeName, bucketName,
+OzoneFSUtils.addTrailingSlashIfNeeded(keyName));
+OMDirectoryCreateRequest omDirectoryCreateRequest =
+new OMDirectoryCreateRequest(omRequest);
+
+OMRequest modifiedOmRequest =
+omDirectoryCreateRequest.preExecute(ozoneManager);
+
+omDirectoryCreateRequest = new OMDirectoryCreateRequest(modifiedOmRequest);
+
+Assert.assertEquals(0L, omMetrics.getNumKeys());
+OMClientResponse omClientResponse =
+omDirectoryCreateRequest.validateAndUpdateCache(ozoneManager, 100L,
+ozoneManagerDoubleBufferHelper);
+
+Assert.assertTrue(omClientResponse.getOMResponse().getStatus()
 
 Review comment:
   Please use `assertEquals` instead of `assertTrue(... == ...)`.  It provides 
better feedback in case the assertion fails.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #68: HDDS-2320. Negative value seen for OM NumKeys Metric in JMX.

2019-10-21 Thread GitBox
avijayanhwx commented on a change in pull request #68: HDDS-2320. Negative 
value seen for OM NumKeys Metric in JMX.
URL: https://github.com/apache/hadoop-ozone/pull/68#discussion_r337175595
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMDirectoryCreateRequest.java
 ##
 @@ -349,6 +349,47 @@ public void testValidateAndUpdateCacheWithFilesInPath() 
throws Exception {
 
   }
 
+  @Test
+  public void testCreateDirectoryOMMetric()
+  throws Exception {
+String volumeName = "vol1";
+String bucketName = "bucket1";
+String keyName = RandomStringUtils.randomAlphabetic(5);
+for (int i =0; i< 3; i++) {
+  keyName += "/" + RandomStringUtils.randomAlphabetic(5);
+}
+
+// Add volume and bucket entries to DB.
+TestOMRequestUtils.addVolumeAndBucketToDB(volumeName, bucketName,
+omMetadataManager);
+
+OMRequest omRequest = createDirectoryRequest(volumeName, bucketName,
+OzoneFSUtils.addTrailingSlashIfNeeded(keyName));
+OMDirectoryCreateRequest omDirectoryCreateRequest =
+new OMDirectoryCreateRequest(omRequest);
+
+OMRequest modifiedOmRequest =
+omDirectoryCreateRequest.preExecute(ozoneManager);
+
+omDirectoryCreateRequest = new OMDirectoryCreateRequest(modifiedOmRequest);
+
+Assert.assertEquals(0L, omMetrics.getNumKeys());
+OMClientResponse omClientResponse =
+omDirectoryCreateRequest.validateAndUpdateCache(ozoneManager, 100L,
+ozoneManagerDoubleBufferHelper);
+
+Assert.assertTrue(omClientResponse.getOMResponse().getStatus()
 
 Review comment:
   Thanks @adoroszlai. Changed it. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel commented on issue #49: HDDS-2310. Add support to add ozone ranger plugin to Ozone Manager cl…

2019-10-21 Thread GitBox
vivekratnavel commented on issue #49: HDDS-2310. Add support to add ozone 
ranger plugin to Ozone Manager cl…
URL: https://github.com/apache/hadoop-ozone/pull/49#issuecomment-544652974
 
 
   @anuengineer The integration test failures are not related. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337151963
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
 
 Review comment:
   Typo: `rquirements` -> `requirements`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337181062
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
+
+  1. Fork `apache/hadoop-ozone` github repository (

[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337128595
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
 
 Review comment:
   Typo: `differenceis` -> `differences`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337154164
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
 
 Review comment:
   NIT: `The main contribution work

[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337153668
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
 
 Review comment:
   NIT: Extra space in `We  use`

-

[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337152242
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
 
 Review comment:
   NIT: `our` -> `your`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337160654
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
+
+  1. Fork `apache/hadoop-ozone` github repository (

[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337179934
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
+
+  1. Fork `apache/hadoop-ozone` github repository (

[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a new CONTRIBUTION.md for the new repository

2019-10-21 Thread GitBox
dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337181851
 
 

 ##
 File path: CONTRIBUTION.md
 ##
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarize the contribution process and defines the differenceis. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+ * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+ * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * The [wiki 
pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide):
 Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write 
access to the wiki.
+ * **Testing**: We always need help to improve our testing
+  * Unit Tests (JUnit / Java)
+  * Acceptance Tests (docker + robot framework)
+  * Blockade tests (python + blockade) 
+  * Performance: We have multiple type of load generator / benchmark tools 
(`ozone freon`, `ozone genesis`). Which can be used to test cluster and report 
problems.
+ * **Bug reports** pointing out broken functionality, docs, or suggestions for 
improvements are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact
+
+ * in **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel at the ASF slack. Invite link 
is [here](http://s.apache.org/slack-invite) 
+ * **meeting**: [We have weekly 
meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls)
 which is open to anybody. Feel free to join and ask any questions
+
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional rquirements to execute different type of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start pseudo cluster, also used for blockade and 
acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) To execute network 
fault-injection testing.
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web ui.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start our first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+`hadoop-ozone/dev-support/checks` directory contains helper scripts to build 
and check your code. (Including findbugs and checkstyle). Use them if you don't 
know the exact maven gools / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
+
+### Maven build options:
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the relase. 
It's way more faster, but you can't test Hadoop Compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (Web UI and monitoring) from 
the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (Without this profile you won't have 
the final tar file)
+  * Use `-Pdocker-build` to build a docker image which includes Ozone
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image
+  * USe `-Pdocker-push` to puseh the created docker image to the docker 
registry
+  
+## Contribute your modifications
+
+We  use github pull requests instead of uploading patches to JIRA. The main 
contribution workflow is the following:
+
+  1. Fork `apache/hadoop-ozone` github repository (