Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:
       hadoop.fs.TestFileUtil
       hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.hdfs.TestMultipleNNPortQOP
       hadoop.hdfs.TestAppendDifferentChecksum
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
       hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
       hadoop.hdfs.server.federation.router.TestRouterQuota
       hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
       hadoop.mapreduce.lib.input.TestLineRecordReader
       hadoop.mapred.TestLineRecordReader
       hadoop.resourceestimator.service.TestResourceEstimatorService
       hadoop.resourceestimator.solver.impl.TestLpSolver
       hadoop.yarn.sls.TestSLSRunner
       hadoop.yarn.client.api.impl.TestAMRMClient
       hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
       hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
       hadoop.yarn.server.resourcemanager.TestClientRMService
       hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
       hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/diff-compile-javac-root.txt [488K]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/diff-checkstyle-root.txt [14M]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/patch-mvnsite-root.txt [596K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/pathlen.txt [12K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/diff-patch-shellcheck.txt [72K]

   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/whitespace-tabs.txt [1.3M]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/patch-javadoc-root.txt [76K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [220K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [436K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [16K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt [16K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [28K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/818/artifact/out/patch
Re: Hadoop HDFS project
Hello Vinay,

Yes, you are not subscribed yet; your mail has not reached the mailing list.
https://lists.apache.org/thread/xd0yy0dgf22lxdb4wysc3k0coyhc3wnx

> I sent an email to subscribe, but I didn't get any response.

Please check all mail and spam folders. I tried the same and got a response
mail with the title "confirm subscribe to hdfs-dev@hadoop.apache.org".

Thanks and Regards,
Aditya Sharma

On Mon, Oct 17, 2022 at 6:47 PM Vinay Kv wrote:

> Hi Aditya,
> Thanks a lot for the reply.
> I sent an email to subscribe, but I didn't get any response.
>
> I went through the contribution guide and successfully set up the Hadoop
> project in my local IDE.
>
> I went through how to create a build, how to submit patches, and how a
> release is made, to understand the release flow.
>
> @hdfs-dev@hadoop.apache.org
>
> One thing I am struggling to understand is how we test the code in terms
> of functionality. The How to Contribute guide talks about existing unit
> tests that have to pass, and new tests we should create when adding new
> functionality.
> My understanding from this is that the only way to do a functionality
> test is to create a build and then test it; we cannot run the code
> without creating a build.
>
> What is the best way to understand the code flow of HDFS?
> I usually trigger a piece of functionality and debug the code to
> understand the flow. Is there a way to debug the code to understand the
> flow and the fixes, or is there any other way to understand the code
> flow? Unit tests use mini HDFS clusters; can we use them to understand
> the flow?
>
> For example, if I have to understand the NameNode edit log functionality,
> I start with the NameNode class, go through the Javadoc, find a few more
> classes it uses, and then go through those. But this approach is
> 1. time-consuming, and 2. I might be wrong in my reading of the doc, or
> the doc might not have been updated.
>
> Sorry if I have asked any lame questions, and thank you in advance for
> your replies, time, and patience.
>
> Thanks,
> Vinay
>
> On Thu, 13 Oct 2022 at 11:42, Aditya Sharma wrote:
>
>> Hello Vinay,
>>
>> Welcome to the Apache ecosystem! Glad to know that you want to
>> contribute to Apache Hadoop HDFS.
>>
>> As I am not part of the Apache Hadoop community, I may not be able to
>> help you with the technical aspects, but I can certainly guide you on
>> how to get the help you need.
>>
>> Here are some documents that may help you contribute:
>>
>> Hadoop Contributor Guide
>> https://cwiki.apache.org/confluence/x/FCFPBQ
>>
>> How to Contribute to Apache Hadoop
>> https://cwiki.apache.org/confluence/x/iwQwB
>>
>> Here are the open issues you could pick from:
>> https://issues.apache.org/jira/browse/HDFS-16799?jql=project%20%3D%20HDFS%20AND%20status%20%3D%20Open%20AND%20resolution%20%3D%20Unresolved%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC
>>
>> The mailing list is the primary forum for communication, so if you'd
>> like to contribute to HDFS, please do subscribe to the HDFS developer
>> mailing list: hdfs-dev@hadoop.apache.org.
>> How to subscribe:
>> 1. Send an empty mail (Subject: Subscribe) to
>> hdfs-dev-subscr...@hadoop.apache.org
>> 2. You will receive a confirmation mail; just reply to confirm.
>>
>> Feel free to ask your questions on the mailing list.
>>
>> I am looping in the HDFS developer mailing list so that anyone from the
>> community can come forward and help you.
>>
>> HTH
>>
>> Thanks and Regards,
>> Aditya Sharma
>>
>> On Wed, Oct 12, 2022 at 3:42 PM Vinay Kv wrote:
>>
>>> Hi Aditya/Priya,
>>> I am Vinay; I have around 9 years of experience in software
>>> development.
>>>
>>> I am looking to understand the HDFS code so that I can contribute in
>>> the future.
>>>
>>> I am reaching out to you to see if there is any head start I can get,
>>> in terms of where to start and how to approach it.
>>>
>>> Any starting point or document will help me a lot in picking things up.
>>>
>>> Thanks in advance for your help.
>>>
>>> PS: I got your contact while going through the Hadoop Confluence wiki.
>>>
>>> --
>>> Regards,
>>> Vinay
>
> --
> Regards,
> Vinay.K.V
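The mini HDFS cluster mentioned in the thread above can indeed be used this way: HDFS's own unit tests spin up an in-process NameNode and DataNodes via `MiniDFSCluster`, and running such a test under a debugger lets you step through NameNode internals (for example, the edit log path) without building and deploying a full cluster. A minimal sketch follows; `MiniDFSCluster`, `HdfsConfiguration`, and the `FileSystem` API are real Hadoop test/classpath classes, while the test class name and the suggested breakpoint location are illustrative, not taken from the thread:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Hypothetical walkthrough test: starts an in-process HDFS cluster,
// performs one namespace operation, and shuts the cluster down.
// Debugging this from an IDE steps straight into NameNode code.
public class TestEditLogWalkthrough {

  @Test
  public void mkdirTouchesTheEditLog() throws Exception {
    Configuration conf = new HdfsConfiguration();
    // One DataNode is enough for a namespace-only experiment.
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      // Set a breakpoint inside FSEditLog on the NameNode side before
      // this call to watch the edit record being written.
      assertTrue(fs.mkdirs(new Path("/walkthrough")));
      assertTrue(fs.exists(new Path("/walkthrough")));
    } finally {
      cluster.shutdown();
    }
  }
}
```

Because everything runs in one JVM, an IDE breakpoint in any NameNode class pauses the whole mini cluster, which answers the "debug to understand the flow" question without a separate build-and-deploy cycle.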
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1017/

[Oct 17, 2022, 4:33:10 AM] (noreply) HADOOP-18493: upgrade jackson-databind to 2.12.7.1 (#5011). Contributed by PJ Fanning.
[Oct 17, 2022, 4:44:25 AM] (noreply) HADOOP-18462. InstrumentedWriteLock should consider Reentrant case (#4919). Contributed by ZanderXu.
[Oct 17, 2022, 10:56:15 AM] (noreply) HDFS-6874. Add GETFILEBLOCKLOCATIONS operation to HttpFS (#4750)
[Oct 17, 2022, 5:10:47 PM] (noreply) HADOOP-18156. Address JavaDoc warnings in classes like MarkerTool, S3ObjectAttributes, etc (#4965)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org