[jira] [Created] (HDFS-16362) [FSO] Refactor isFileSystemOptimized usage in OzoneManagerUtils

2021-11-30 Thread Rakesh Radhakrishnan (Jira)
Rakesh Radhakrishnan created HDFS-16362. Summary: [FSO] Refactor isFileSystemOptimized usage in OzoneManagerUtils. Key: HDFS-16362. URL: https://issues.apache.org/jira/browse/HDFS-16362

[jira] [Resolved] (HDFS-15253) Set default throttle value on dfs.image.transfer.bandwidthPerSec

2020-10-07 Thread Rakesh Radhakrishnan (Jira)
[ https://issues.apache.org/jira/browse/HDFS-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh Radhakrishnan resolved HDFS-15253. Fix Version/s: 3.4.0. Resolution: Fixed. > Set default throttle value…

Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Rakesh Radhakrishnan
Thanks Sammi for getting this out! +1 (binding)
* Verified signatures.
* Built from source.
* Deployed small non-HA un-secure cluster.
* Verified basic Ozone file system.
* Tried out a few basic Ozone shell commands - create, list, delete
* Ran a few Freon benchmark…

Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-13 Thread Rakesh Radhakrishnan
Thanks Brahma for getting this out! +1 (binding) Verified the following and looks fine to me.
* Built from source with CentOS 7.4 and OpenJDK 1.8.0_232.
* Deployed 3-node cluster.
* Verified HDFS web UIs.
* Tried out a few basic hdfs shell commands.
* Ran sample Terasort…

Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-19 Thread Rakesh Radhakrishnan
+1 Rakesh On Fri, Sep 20, 2019 at 12:29 AM Aaron Fabbri wrote: > +1 (binding) > > Thanks to the Ozone folks for their efforts at maintaining good separation > with HDFS and common. I took a lot of heat for the unpopular opinion that > they should be separate, so I am glad the process has worke…

Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-03 Thread Rakesh Radhakrishnan
+1, Thanks for the proposal. I am interested to participate in this project. Please include me as well in the project. Thanks, Rakesh On Tue, Sep 3, 2019 at 11:59 AM zhankun tang wrote: > +1 > > Thanks for Wangda's proposal. > > The submarine project is born within Hadoop, but not limited to H…

Re: [VOTE] Merge Storage Policy Satisfier (SPS) [HDFS-10285] feature branch to trunk

2018-08-07 Thread Rakesh Radhakrishnan
+1 Thanks, Rakesh On Wed, Aug 1, 2018 at 12:08 PM, Uma Maheswara Rao G wrote: > Hi All, > > > > From the positive responses from JIRA discussion and no objections from > below DISCUSS thread [1], I am converting it to voting thread. > > > > Last couple of weeks we spent time on testing the fe…

Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)

2018-04-27 Thread Rakesh Radhakrishnan
Thanks Sammi for getting this out! +1 (binding) Verified the following and looks fine to me.
* Built from source.
* Deployed 3 node cluster with NameNode HA.
* Verified HDFS web UIs.
* Tried out HDFS shell commands.
* Ran Mover, Balancer tools.
* Ran sample MapReduc…

Re: [VOTE] Adopt HDSL as a new Hadoop subproject

2018-03-27 Thread Rakesh Radhakrishnan
+1 for the sub-project idea. Thanks to everyone that contributed! Regards, Rakesh On Tue, Mar 27, 2018 at 4:46 PM, Jack Liu wrote: > +1 (non-binding) > > > On Tue, Mar 27, 2018 at 2:16 AM, Tsuyoshi Ozawa wrote: > > > +1(binding), > > > > - Tsuyoshi > > > > On Tue, Mar 20, 2018 at 14:21 Owen O…

Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-20 Thread Rakesh Radhakrishnan
Thanks Andrew for getting this out! +1 (non-binding)
* Built from source on CentOS 7.3.1611, jdk1.8.0_111
* Deployed non-ha cluster and tested few EC file operations.
* Ran basic shell commands (ls, mkdir, put, get, ec, dfsadmin).
* Ran some sample jobs.
* HDFS Namenode UI looks good.
Thanks, Ra…

Re: [VOTE] Release Apache Hadoop 2.8.2 (RC1)

2017-10-24 Thread Rakesh Radhakrishnan
Thanks Junping for getting this out. +1 (non-binding)
* Built from source on CentOS 7.3.1611, jdk1.8.0_111
* Deployed 3 node cluster
* Ran some sample jobs
* Ran balancer
* Operated HDFS from command line: ls, put, dfsadmin etc
* HDFS Namenode UI looks good
Thanks, Rakesh On Fri, Oct 20, 2017 a…

Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-22 Thread Rakesh Radhakrishnan
Thanks Junping for getting this out. +1 (non-binding)
* downloaded and built from source with jdk1.8.0_45
* deployed HDFS-HA cluster
* ran some sample jobs
* ran balancer
* executed basic dfs cmds
Rakesh On Wed, Mar 22, 2017 at 8:30 PM, Jian He wrote: > +1 (binding) > > - built from source > …

Re: How to setup local environment to run kerberos test cases.

2016-09-29 Thread Rakesh Radhakrishnan
Maybe it's due to file permission issues or something else. The test uses MiniKdc, which is based on Apache Directory Server and is embedded in the test cases. Could you share the complete logs of the failed test? I think you can look at the following location on your machine/env: $HADOOP_HOME/hadoop-hdfs-project/…
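As an aside (not part of the original mail): when hunting for the failed test's logs, the Maven Surefire report files under a `target/surefire-reports/` directory end with a well-known summary line. A small, hypothetical helper to pull the counts out of such a report might look like this:

```python
import re

def parse_surefire_summary(text):
    """Extract (run, failures, errors, skipped) from a Maven Surefire
    summary line such as:
        Tests run: 4, Failures: 1, Errors: 0, Skipped: 0
    Returns None when no summary line is present in the text."""
    m = re.search(
        r"Tests run: (\d+), Failures: (\d+), Errors: (\d+), Skipped: (\d+)",
        text,
    )
    if m is None:
        return None
    return tuple(int(g) for g in m.groups())

# Example report fragment (illustrative only):
sample = "Tests run: 4, Failures: 1, Errors: 0, Skipped: 0"
counts = parse_surefire_summary(sample)
```

This only sketches the log-scanning idea; in practice you would glob the `*.txt` report files and share the ones with non-zero failure or error counts.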

Re: How to setup local environment to run kerberos test cases.

2016-09-29 Thread Rakesh Radhakrishnan
I hope the following documents will help you; they contain the details about how to build and run Hadoop test cases. Please take a look: https://github.com/apache/hadoop/blob/branch-2.7.3/BUILDING.txt http://hadoop.apache.org/docs/r2.7.3/hadoop-auth/BuildingIt.html Please give few more d…

Re: HDFS Balancer Stuck after 10 Minz

2016-09-08 Thread Rakesh Radhakrishnan
Have you taken multiple thread dumps (jstack) and observed which operations are being performed during this period of time? Perhaps there is a high chance it is searching for data blocks which it can move around to balance the cluster. Could you tell me the used space and available space values? H…
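For context on why used/available space matters here, the Balancer picks move candidates by comparing each DataNode's utilization against the cluster average, within a configurable threshold (10% by default). A simplified sketch of that classification — not the actual Balancer code, just the idea — could be:

```python
def classify_datanodes(capacity_used, threshold=10.0):
    """Sketch of the HDFS Balancer's node classification.

    capacity_used: {node: (used_bytes, capacity_bytes)}
    A node whose utilization differs from the cluster-average
    utilization by more than `threshold` percentage points becomes
    a move source (over-utilized) or target (under-utilized)."""
    total_used = sum(u for u, _ in capacity_used.values())
    total_cap = sum(c for _, c in capacity_used.values())
    avg = 100.0 * total_used / total_cap
    over, under = [], []
    for node, (used, cap) in capacity_used.items():
        util = 100.0 * used / cap
        if util > avg + threshold:
            over.append(node)      # candidate source of block moves
        elif util < avg - threshold:
            under.append(node)     # candidate target of block moves
    return over, under

# Hypothetical 3-node cluster: average utilization is 50%.
nodes = {"dn1": (90, 100), "dn2": (10, 100), "dn3": (50, 100)}
over, under = classify_datanodes(nodes, threshold=10.0)
```

With these numbers only dn1 exceeds average+threshold and only dn2 falls below average-threshold, so the Balancer would try to move blocks from dn1 toward dn2; once no node is outside the band, it exits — which is one reason a run can appear "stuck" while it searches for movable blocks.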

Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-31 Thread Rakesh Radhakrishnan
Thanks for getting this out. +1 (non-binding)
- downloaded and built tarball from source
- deployed HDFS-HA cluster and tested few EC file operations
- executed few hdfs commands including EC commands
- viewed basic UI
- ran some of the sample jobs
Best Regards, Rakesh Intel On Thu, Sep 1, 201…

Re: [DISCUSS] Retire BKJM from trunk?

2016-07-27 Thread Rakesh Radhakrishnan
If I remember correctly, Huawei also adopted the QJM component. I hope @Vinay might have discussed this internally in Huawei before starting this e-mail discussion thread. I'm +1 for removing the bkjm contrib from the trunk code. Also, there are quite a few open sub-tasks under the HDFS-3399 umbrella jira, whic…

Re: Improving recovery performance for degraded reads

2016-07-27 Thread Rakesh Radhakrishnan
…can > display to the client, do you think striping would still help? > Is there a possibility that since I know that all the segments of the HD > image would always be read together, by striping and distributing it on > different nodes, I am ignoring its spatial/temporal locality…

Re: [VOTE] Release Apache Hadoop 2.7.3 RC0

2016-07-26 Thread Rakesh Radhakrishnan
Thank you Vinod. +1 (non-binding)
- downloaded and built from source
- deployed HDFS-HA cluster and tested few switching behaviors
- executed few hdfs commands from command line
- viewed basic UI
- ran HDFS/Common unit tests
- checked LICENSE and NOTICE files
Regards, Rakesh Intel On Tue, Jul 2…

Re: Improving recovery performance for degraded reads

2016-07-22 Thread Rakesh Radhakrishnan
…tting read requests to fetch all the 'k' chunks (belonging to the same stripe as the failed chunk) from k data nodes and perform decoding to rebuild the lost data chunk at the client side. Regards, Rakesh On Fri, Jul 22, 2016 at 5:43 PM, Rakesh Radhakrishnan wrote: > Hi Roy, > > Th…
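The degraded-read flow described above — fetch k surviving chunks of the stripe and decode the lost one client-side — can be illustrated with a single-parity (XOR) code. HDFS actually uses Reed-Solomon (e.g. RS-6-3), where any k of the k+m chunks suffice, but the XOR case shows the same shape of the recovery in a few lines (illustrative sketch only, not HDFS code):

```python
def xor_chunks(chunks):
    """Bytewise XOR of equal-length chunks."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

# Encode: the parity chunk is the XOR of the k data chunks.
data = [b"abcd", b"efgh", b"ijkl"]        # k = 3 data chunks
parity = xor_chunks(data)                  # m = 1 parity chunk

# Degraded read: suppose data chunk 1 (b"efgh") is lost. The client
# fetches the k surviving chunks of the stripe (the other data chunks
# plus the parity chunk) and XORs them to rebuild the missing chunk.
rebuilt = xor_chunks([data[0], data[2], parity])
```

Since parity = d0 ^ d1 ^ d2, XORing d0, d2, and the parity cancels everything except d1, recovering the lost chunk. Reed-Solomon generalizes this so that up to m arbitrary chunks can be lost.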

Re: Improving recovery performance for degraded reads

2016-07-22 Thread Rakesh Radhakrishnan
Hi Roy, Thanks for your interest in the HDFS erasure coding feature and for helping us make the feature more attractive to users by sharing performance improvement ideas. Presently, the reconstruction work is implemented in a centralized manner, in which the reconstruction task will be given…

Re: HDFS Erasuring Coding Block placement policy related reconstruction work not scheduled appropriately

2016-06-09 Thread Rakesh Radhakrishnan
Thanks Rui for reporting this. With the "RS-DEFAULT-6-3-64k" EC policy, an EC file will have 6 data blocks and 3 parity blocks. As you described, initially the cluster has 5 racks, so the first 5 data blocks will use those racks. Now, while adding rack-6, a reconstruction task will be scheduled for placing…
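To make the rack-count arithmetic above concrete: an RS-6-3 block group has 9 blocks, and the rack-fault-tolerant placement tries to put them on distinct racks. A toy round-robin sketch (not HDFS's actual BlockPlacementPolicy, just the counting intuition) shows why 5 racks force some racks to hold 2 blocks of the same group:

```python
from collections import Counter

def spread_blocks(num_blocks, racks):
    """Round-robin sketch of spreading an EC block group across racks.
    Illustrative only: real HDFS placement also weighs nodes, load,
    and fault tolerance, but the distinct-racks-first intuition holds."""
    return {i: racks[i % len(racks)] for i in range(num_blocks)}

# RS-6-3: 6 data + 3 parity = 9 blocks per block group.
five_racks = spread_blocks(9, ["rack%d" % r for r in range(1, 6)])
per_rack = Counter(five_racks.values())
```

With only 5 racks, every rack is used and at least some racks carry 2 blocks of the group; when a 6th rack is added, reconstruction/placement work can spread the group further, which is why adding rack-6 triggers the scheduling being discussed.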