Rakesh Radhakrishnan created HDFS-16362:
-----------------------------------------
Summary: [FSO] Refactor isFileSystemOptimized usage in OzoneManagerUtils
Key: HDFS-16362
URL: https://issues.apache.org/jira/browse/HDFS-16362

[ https://issues.apache.org/jira/browse/HDFS-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Rakesh Radhakrishnan resolved HDFS-15253.
-----------------------------------------
Fix Version/s: 3.4.0
Resolution: Fixed
> Set default throttle value
Thanks Sammi for getting this out!
+1 (binding)
* Verified signatures.
* Built from source.
* Deployed small non-HA un-secure cluster.
* Verified basic Ozone file system.
* Tried out a few basic Ozone shell commands - create, list, delete.
* Ran a few Freon benchmarks.
Thanks Brahma for getting this out!
+1 (binding)
Verified the following and it looks fine to me.
* Built from source on CentOS 7.4 with OpenJDK 1.8.0_232.
* Deployed 3-node cluster.
* Verified HDFS web UIs.
* Tried out a few basic hdfs shell commands.
* Ran a sample Terasort.
+1
Rakesh
On Fri, Sep 20, 2019 at 12:29 AM Aaron Fabbri wrote:
> +1 (binding)
>
> Thanks to the Ozone folks for their efforts at maintaining good separation
> with HDFS and common. I took a lot of heat for the unpopular opinion that
> they should be separate, so I am glad the process has worked
+1, Thanks for the proposal.
I am interested in participating in this project. Please include me in the
project as well.
Thanks,
Rakesh
On Tue, Sep 3, 2019 at 11:59 AM zhankun tang wrote:
> +1
>
> Thanks for Wangda's proposal.
>
> The Submarine project was born within Hadoop, but it is not limited to Hadoop
+1
Thanks,
Rakesh
On Wed, Aug 1, 2018 at 12:08 PM, Uma Maheswara Rao G
wrote:
> Hi All,
>
> Given the positive responses in the JIRA discussion and no objections on
> the DISCUSS thread below [1], I am converting it to a voting thread.
>
> Over the last couple of weeks we spent time testing the fe
Thanks Sammi for getting this out!
+1 (binding)
Verified the following and it looks fine to me.
* Built from source.
* Deployed a 3-node cluster with NameNode HA.
* Verified HDFS web UIs.
* Tried out HDFS shell commands.
* Ran Mover and Balancer tools.
* Ran sample MapReduce jobs.
+1 for the sub-project idea. Thanks to everyone who contributed!
Regards,
Rakesh
On Tue, Mar 27, 2018 at 4:46 PM, Jack Liu wrote:
> +1 (non-binding)
>
>
> On Tue, Mar 27, 2018 at 2:16 AM, Tsuyoshi Ozawa wrote:
>
> > +1(binding),
> >
> > - Tsuyoshi
> >
> > On Tue, Mar 20, 2018 at 14:21 Owen O'Malley wrote:
Thanks Andrew for getting this out!
+1 (non-binding)
* Built from source on CentOS 7.3.1611, jdk1.8.0_111
* Deployed a non-HA cluster and tested a few EC file operations.
* Ran basic shell commands (ls, mkdir, put, get, ec, dfsadmin).
* Ran some sample jobs.
* HDFS NameNode UI looks good.
Thanks,
Rakesh
Thanks Junping for getting this out.
+1 (non-binding)
* Built from source on CentOS 7.3.1611, jdk1.8.0_111
* Deployed a 3-node cluster
* Ran some sample jobs
* Ran the balancer
* Operated HDFS from the command line: ls, put, dfsadmin, etc.
* HDFS NameNode UI looks good
Thanks,
Rakesh
On Fri, Oct 20, 2017 a
Thanks Junping for getting this out.
+1 (non-binding)
* downloaded and built from source with jdk1.8.0_45
* deployed HDFS-HA cluster
* ran some sample jobs
* ran the balancer
* executed basic dfs commands
Rakesh
On Wed, Mar 22, 2017 at 8:30 PM, Jian He wrote:
> +1 (binding)
>
> - built from source
>
Maybe it's due to file permission issues or something else. The test uses
MiniKdc, which is based on Apache Directory Server and is embedded in the
test cases. Could you share the complete logs of the failed test? You can
look at this location on your machine/env:
$HADOOP_HOME/hadoop-hdfs-project/
I hope the following documents will help you; they contain details about
how to build and run Hadoop test cases. Please take a look:
https://github.com/apache/hadoop/blob/branch-2.7.3/BUILDING.txt
http://hadoop.apache.org/docs/r2.7.3/hadoop-auth/BuildingIt.html
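For example, once the build works, an individual failing test can be rerun
in isolation roughly like this (a sketch; substitute your actual failing
test class for the placeholder):

  cd $HADOOP_HOME/hadoop-hdfs-project/hadoop-hdfs
  mvn test -Dtest=<YourFailingTestClass>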
Please give a few more details
Have you taken multiple thread dumps (jstack) and observed which operations
are being performed during this period of time? Perhaps there is a high
chance it is searching for data blocks which it can move around to
balance the cluster.
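For example, a couple of dumps spaced a few seconds apart can be captured
with the standard JDK tools (a sketch; <pid> is the balancer's process id
reported by jps):

  jps -l                           # locate the Balancer process id
  jstack <pid> > balancer-dump-1.txt
  sleep 10
  jstack <pid> > balancer-dump-2.txt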
Could you tell me the used space and available space values? H
Thanks for getting this out.
+1 (non-binding)
- downloaded and built the tarball from source
- deployed an HDFS-HA cluster and tested a few EC file operations
- executed a few hdfs commands, including EC commands
- viewed basic UI
- ran some of the sample jobs
Best Regards,
Rakesh
Intel
On Thu, Sep 1, 201
If I remember correctly, Huawei also adopted the QJM component. I hope @Vinay
discussed this internally at Huawei before starting this e-mail discussion
thread. I'm +1 for removing the bkjm contrib from the trunk code.
Also, there are quite a few open sub-tasks under the HDFS-3399 umbrella jira,
which
can
> display to the client, do you think striping would still help?
> Is there a possibility that since I know that all the segments of the HD
> image would always be read together, by striping and distributing it on
> different nodes, I am ignoring its spatial/temporal locality
Thank you Vinod.
+1 (non-binding)
- downloaded and built from source
- deployed an HDFS-HA cluster and tested a few switching behaviors
- executed a few hdfs commands from the command line
- viewed basic UI
- ran HDFS/Common unit tests
- checked LICENSE and NOTICE files
Regards,
Rakesh
Intel
On Tue, Jul 2
tting read
requests to fetch all the 'k' chunks (belonging to the same stripe as the
failed chunk) from k data nodes and perform decoding to rebuild the lost
data chunk at the client side.
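As a minimal sketch of that client-side decode step (illustrative only: it
assumes a simple single-parity XOR scheme rather than the Reed-Solomon
coders HDFS actually uses, and the class/method names are hypothetical):

  public class XorReconstructExample {
    // Rebuild the missing chunk as the XOR of all surviving chunks of the
    // stripe (valid for a single-parity XOR code; chunks are equal-length).
    static byte[] reconstructLostChunk(byte[][] survivingChunks) {
      byte[] recovered = new byte[survivingChunks[0].length];
      for (byte[] chunk : survivingChunks) {
        for (int i = 0; i < chunk.length; i++) {
          recovered[i] ^= chunk[i];
        }
      }
      return recovered;
    }
  }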
Regards,
Rakesh
On Fri, Jul 22, 2016 at 5:43 PM, Rakesh Radhakrishnan
wrote:
> Hi Roy,
>
> Th
Hi Roy,
Thanks for your interest in the HDFS erasure coding feature and for helping
us make the feature more attractive to users by sharing performance
improvement ideas.
Presently, the reconstruction work has been implemented in a centralized
manner, in which the reconstruction task will be given
Thanks Rui for reporting this.
With "RS-DEFAULT-6-3-64k EC policy" EC file will have 6 data blocks and 3
parity blocks. Like you described initially the cluster has 5 racks, so the
first 5 data blocks will use those racks. Now while adding rack-6,
reconstruction task will be scheduled for placing
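As a rough sketch of the arithmetic behind spreading one block group across
racks (illustrative only; it assumes an even, rack-fault-tolerant placement
of the group's internal blocks):

  public class EcPlacementMath {
    public static void main(String[] args) {
      int dataBlocks = 6, parityBlocks = 3;
      int totalBlocks = dataBlocks + parityBlocks; // 9 internal blocks per group
      int racks = 5;
      // An even spread places at most ceil(9/5) = 2 internal blocks per rack,
      // so losing any single rack costs at most 2 blocks, which stays within
      // the 3-block fault tolerance of RS-6-3.
      int maxPerRack = (totalBlocks + racks - 1) / racks;
      System.out.println("Max internal blocks per rack: " + maxPerRack);
    }
  }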