dineshchitlangia commented on a change in pull request #58: HDDS-2293. Create a 
new CONTRIBUTION.md for the new repository
URL: https://github.com/apache/hadoop-ozone/pull/58#discussion_r337160654
 
 

 ##########
 File path: CONTRIBUTION.md
 ##########
 @@ -0,0 +1,174 @@
+Apache Hadoop Ozone Contribution guideline
+===
+
+Ozone is a part of the Apache Hadoop project. The bug tracking system for 
Ozone is under the [Apache Jira project named 
HDDS](https://issues.apache.org/jira/projects/HDDS/).
+
+If you are familiar with contributing to Apache Hadoop, then you already know 
everything you need to know to start filing Ozone bugs and submitting patches.
+
+If you have never contributed to Apache Hadoop before, then you may find it 
useful to read [How To 
Contribute](https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone).
+
+This document summarizes the contribution process and defines the differences. 
 
+
+## What can I contribute?
+
+We welcome contributions of:
+
+ * **Code**. File a bug and submit a patch, or pick up any one of the 
unassigned Jiras.
+   * [Newbie Ozone jiras](https://s.apache.org/OzoneNewbieJiras)
+   * [All open and unassigned Ozone 
jiras](https://s.apache.org/OzoneUnassignedJiras)
+ * **Documentation Improvements**: You can submit improvements to either:
+     * Ozone website. Instructions are here: [Modifying the Ozone 
Website](https://cwiki.apache.org/confluence/display/HADOOP/Modifying+the+Ozone+Website)
+     * Developer docs. These are markdown files [checked into the Apache 
Hadoop Source 
tree](https://github.com/apache/hadoop-ozone/tree/master/hadoop-hdds/docs/content).
+ * **Wiki**: the [wiki pages](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Contributor+Guide). Please contact us at hdfs-dev@hadoop.apache.org and we can provide you write access to the wiki.
+ * **Testing**: We always need help to improve our testing:
+      * Unit Tests (JUnit / Java)
+      * Acceptance Tests (docker + robot framework)
+      * Blockade tests (python + blockade) 
+      * Performance: We have multiple types of load generator / benchmark tools (`ozone freon`, `ozone genesis`) which can be used to test a cluster and report problems.
+ * **Bug reports** pointing out broken functionality or documentation, and suggestions for improvements, are always welcome!
+ 
+## Who To Contact
+
+If you have any questions, please don't hesitate to contact us:
+
+ * by **mail**: use hdfs-dev@hadoop.apache.org.
+ * in **chat**: You can find the #ozone channel on the ASF Slack. The invite link is [here](http://s.apache.org/slack-invite).
+ * in **meetings**: [We have weekly meetings](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls) which are open to anybody. Feel free to join and ask any questions.
+    
+## Building from the source code
+
+### Requirements
+
+Requirements to compile the code:
+
+* Unix System
+* JDK 1.8
+* Maven 3.5 or later
+* Protocol Buffers 2.5
+* Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
+
+Additional requirements to run your first pseudo cluster:
+
+* docker
+* docker-compose
+
+Additional requirements to execute different types of tests:
+
+* [Robot framework](https://robotframework.org/) (for executing acceptance 
tests)
+* docker-compose (to start a pseudo cluster, also used for blockade and acceptance tests)
+* [blockade](https://pypi.org/project/blockade/) (to execute network fault-injection tests)
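Both Robot Framework and blockade are distributed on PyPI, so assuming a working Python environment they can be installed with pip:

```shell
# Install the acceptance-test and fault-injection test tooling.
pip install robotframework blockade
```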
+
+Optional dependencies:
+
+* [hugo](https://gohugo.io/) to include the documentation in the web UI.
+
+### Build the project
+
+The build is as simple as:
+
+```
+mvn clean install -DskipTests
+```
+
+And you can start your first cluster:
+
+```
+cd hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
+docker-compose up -d --scale datanode=3
+```
+
+### Helper scripts
+
+The `hadoop-ozone/dev-support/checks` directory contains helper scripts to build and check your code (including findbugs and checkstyle). Use them if you don't know the exact Maven goals / parameters.
+
+These scripts are executed by the CI servers, so it's always good to run them 
locally before creating a PR.
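For example (the individual script names below are assumptions; list the directory to see which checks are actually available):

```shell
cd hadoop-ozone/dev-support/checks
ls                  # show the available check scripts
./checkstyle.sh     # assumed name: run the checkstyle check
./findbugs.sh       # assumed name: run the findbugs check
```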
+
+### Maven build options
+
+  * Use `-DskipShade` to exclude ozonefs jar file creation from the release. It's much faster, but you can't test the Hadoop compatible file system.
+  * Use `-DskipRecon` to exclude the Recon build (web UI and monitoring) from the build. It saves about 2 additional minutes.
+  * Use `-Pdist` to build a distribution (without this profile you won't have the final tar file).
+  * Use `-Pdocker-build` to build a docker image which includes Ozone.
+  * Use `-Ddocker.image=repo/name` to define the name of your docker image.
+  * Use `-Pdocker-push` to push the created docker image to the docker registry.
+  
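Putting the options together, a full distribution build that also produces a docker image might look like this (`myrepo/ozone` is a placeholder image name, substitute your own repository):

```shell
mvn clean install -DskipTests -Pdist -Pdocker-build \
    -Ddocker.image=myrepo/ozone
```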
+## Contribute your modifications
+
+We use GitHub pull requests instead of uploading patches to JIRA. The main contribution workflow is the following:
+
+  1. Fork the `apache/hadoop-ozone` GitHub repository (first time only)
+  2. Create a new Jira in the HDDS project (eg. HDDS-1234)
+  3. Create a local branch for your contribution (eg. `git checkout -b HDDS-1234`)
+  4. Create your commits and push your branch to your personal fork.
+  5. Create a pull request on the GitHub UI
+      * Please include the Jira link, problem description and testing instructions
+  6. Set the Jira to "Patch Available" state
+    
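The branch-per-Jira part of the workflow (steps 3 to 5) can be sketched with plain git commands; `HDDS-1234` and the fork URL are placeholders to substitute with your own Jira id and GitHub account:

```shell
# Placeholders: <your-github-id> and HDDS-1234.
git clone https://github.com/<your-github-id>/hadoop-ozone.git
cd hadoop-ozone
git checkout -b HDDS-1234        # one branch per Jira issue

# ...make your changes, then:
git add -A
git commit -m "HDDS-1234. Short description of the change."
git push origin HDDS-1234        # push to your fork, then open the PR on GitHub
```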
 
 Review comment:
   Add:
   7. Address any review comments if applicable by pushing new commits to the 
PR.
   8. When addressing review comments, there is no need to squash your commits. 
This makes it easy for reviewers to only review the incremental changes. The 
committer will take care to squash all your commits before merging to master.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
