+1 for Ozone. We are in our own repo now. It would be good to remove this
code from Hadoop, otherwise it will confuse new contributors.
I would like to add a git tag to Hadoop, so that people have the ability
to sync back and see the code evolution.
--Anu
On Thu, Oct 24, 2019 at 4:03 PM Giovanni
@Vinod Kumar Vavilapalli
Do we need a separate vote thread for this? There are already JIRAs in
place for Ozone code removal and I gather it is the same for Submarine.
Would it be possible to treat this thread as consensus and act upon the
JIRA itself?
Thanks
Anu
On Sun, Oct 27, 2019 at 6:58 PM 俊
+1
— Anu
> On Dec 6, 2019, at 5:26 PM, Dinesh Chitlangia wrote:
>
> All,
> Since the Apache Hadoop Ozone 0.4.1 release, we have had significant
> bug fixes towards performance & stability.
>
> With that in mind, a 0.4.2 release would be good to consolidate all those fixes.
>
> Pls share your t
+1
> On Jan 23, 2020, at 2:51 PM, Jitendra Pandey
> wrote:
>
> +1 for the feature branch.
>
>> On Thu, Jan 23, 2020 at 1:34 PM Wei-Chiu Chuang
>> wrote:
>>
>> Hi we are working on a feature to improve Erasure Coding, and I would like
>> to seek your opinion on creating a feature branch fo
Hi All,
I would like to propose moving Ozone from 'Alpha' tags to 'Beta' tags when
we do future releases. Here are a couple of reasons why I think we should
make this move.
1. Ozone Manager or the Namenode for Ozone scales to more than 1 billion
keys. We tested this in our labs in an org
+1
—Anu
> On May 13, 2020, at 12:53 AM, Elek, Marton wrote:
>
>
>
> I would like to start a discussion to make a separate Apache project for Ozone
>
>
>
> ### HISTORY [1]
>
> * Apache Hadoop Ozone development started on a feature branch of Hadoop
> repository (HDFS-7240)
>
> * In the Oc
+1 (Non-binding)
- Downloaded 2.6.1 — created a cluster with namenode and a bunch of data nodes.
- verified that Rolling upgrade and Rollback options work correctly in moving
from 2.6.1 to 2.6.2
—Anu
On 10/22/15, 2:14 PM, "sjl...@gmail.com on behalf of Sangjin Lee"
wrote:
>Hi all,
>
>I ha
Could you please attach the PDFs to the JIRA. I think the mailer is stripping
them from the mail.
Thanks
Anu
On 9/5/17, 9:44 AM, "Daniel Templeton" wrote:
>Resending with a broader audience, and reattaching the PDFs.
>
>Daniel
>
>On 9/4/17 9:01 AM, Daniel Templeton wrote:
>> All, in pr
Hi Wangda,
We are planning to start the Ozone merge discussion by the end of this month. I
am hopeful that it will be merged pretty soon after that.
Please add Ozone to the list of features that are being tracked for Apache
Hadoop 3.1.
We would love to release Ozone as an alpha feature in Had
Hi Steve,
In addition to everything Weiwei mentioned (chapter 3 of user guide), if you
really want to drill down to REST protocol you might want to apply this patch
and build ozone.
https://issues.apache.org/jira/browse/HDFS-12690
This will generate an Open API (https://www.openapis.org , http
-1 (binding)
Thank you for all the hard work on 2.9 series. Unfortunately, this is one of
the times I have to -1 this release.
Looks like HADOOP-14840 added a dependency on “oj! Algorithms - version 43.0”,
but we have just added “oj! Algorithms - version 43.0” to the
“LICENSE.txt”. The right a
> This has been a long effort and we're grateful for the support we've
> received from the community. In particular, thanks to Íñigo Goiri,
> Andrew Wang, Anu Engineer, Steve Loughran, Sean Mackrory, Lukas
> Majercak, Uma Gunuganti, Kai Zheng, Rakesh Radhakrishnan, Sr
Hi Eddy,
Thanks for driving this release. Just a quick question, do we have time to
close this issue?
https://issues.apache.org/jira/browse/HDFS-12990
or are we abandoning it? I believe that this is the last window for us to fix
this issue.
Should we have a call and get this resolved one way
Hi All,
I wanted to bring to your attention that HDFS-12990 has been committed to trunk
and branch 3.0.1.
This change reverts the Namenode RPC port to the familiar 8020, making it the
same as the Apache Hadoop 2.x series.
In Hadoop 3.0.0 release, the default port is 9820. If you have deployed Hadoop
3
Hi Owen,
>> 1. It is hard to tell what has changed. git rebase -i tells me the
>> branch has 722 commits. The rebase failed with a conflict. It would really
>> help if you rebased to current trunk.
Thanks for the comments. I have merged trunk to HDFS-7240 branch.
Hopefully, this makes it
Not exactly what you want, but here are the docs from the user perspective.
http://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-common/filesystem/introduction.html#Path_Names
--Anu
On 3/5/18, 5:08 PM, "Zsolt Venczel" wrote:
Hi Hdfs Devs,
While focusing on https://issu
+1 (binding). Thanks for all the hard work and getting this client ready.
It is nice to have an official and supported native client for HDFS.
Thanks
Anu
On 3/13/18, 8:16 PM, "Mukul Kumar Singh" wrote:
+1 (binding)
Thanks,
Mukul
On 14/03/18, 2:06 AM, "Owen O'Malley"
Hi Owen,
Thanks for the proposal. I was hoping for the same releases, but I am okay with
different releases as well.
@Konstantin, I am completely open to the name changes, let us discuss that
in HDFS-10419
and we can make the corresponding change.
--Anu
On 3/19/18, 10:52 AM, "Owen O'Mal
+1 (Binding)
--Anu
On 3/20/18, 11:21 AM, "Owen O'Malley" wrote:
All,
Following our discussions on the previous thread (Merging branch HDFS-7240
to trunk), I'd like to propose the following:
* HDSL become a subproject of Hadoop.
* HDSL will release separately from
Hi Andrew,
Thanks for your comment.
>"Having to delete it each time means more work for mainline RMs and more room
>for error."
The current change that we have done has a maven profile called "-Phdsl";
without this flag it
will not be compiled and will not be included in source or binary pac
Hi Lei/Owen,
Based on Daryn’s suggestion, we have made HDSL a loadable module inside data
node.
It relies on the current loadable module support that is already present in
HDFS.
This module is loaded by Datanodes only when it is configured.
--Anu
On 3/22/18, 11:25 AM, "Owen O'Malley" wrote
Would it be possible to add a maven flag like -skipShade, that helps in
reducing the compile time for people who do not need to build libhdfs++?
Thanks
Anu
From: Jim Clampffer
Date: Tuesday, March 27, 2018 at 11:09 AM
To: Eric Badger
Cc: Deepak Majeti , Jitendra Pandey
, Anu Engineer
+1
--Anu
On 4/16/18, 4:49 PM, "Jitendra Pandey" wrote:
Hi All,
The community unanimously voted (https://s.apache.org/HDDSMergeResult)
to adopt
HDDS/Ozone as a sub-project of Hadoop, here is the formal vote for code
merge.
Here is a quick summary of the code change
Hi St.ack/Wei-Chiu,
It is very kind of St.Ack to bring this question to HDFS Dev. I think this is a
good feature to have. As for the branch question,
HDFS-9924 branch is already open, we could just use that and I am +1 on adding
Duo as a branch committer.
I am not familiar with HBase code base
developers of the feature that
should decide what goes in, what to call the branch, etc. But it would be nice
to have
some sort of continuity of HDFS-9924.
Thanks
Anu
From: on behalf of Stack
Date: Thursday, May 3, 2018 at 9:04 PM
To: Anu Engineer
Cc: Wei-Chiu Chuang , "hdfs-dev@hadoop.apach
+1, I have code reviewed many of these changes and it is an essential set of
changes for HDDS/Ozone.
Thank you for getting this done.
Thanks
Anu
On 6/29/18, 3:14 PM, "Bharat Viswanadham" wrote:
Fixing subject line of the mail.
Thanks,
Bharat
On 6/29/18
Based on conversations with Giovanni and Subru, I have pushed a revert for this
merge.
Thanks
Anu
On 7/5/18, 12:55 PM, "Giovanni Matteo Fumarola"
wrote:
+ common-dev and hdfs-dev as fyi.
Thanks Subru and Sean for the answer.
On Thu, Jul 5, 2018 at 12:14 PM, Subru Krishn
+1, on the Non-Routable Idea. We like it so much that we added it to the Ozone
roadmap.
https://issues.apache.org/jira/browse/HDDS-231
If there is consensus on bringing this to Hadoop in general, we can build this
feature in common.
--Anu
On 7/5/18, 1:09 PM, "Sean Busbey" wrote:
I reall
I ran “git revert -c c163d1797ade0f47d35b4a44381b8ef1dfec5b60 -m 1”
that will remove all changes from Giovanni’s branch (There are 3 YARN commits).
I am presuming that he can recommit the dropped changes directly into trunk.
I do not know of a better way than to lose changes from his branch. I
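For reference, the merge-revert mechanics can be sketched in a throwaway repository (the branch and file names below are illustrative, not the actual Hadoop history); the -m 1 flag tells git to treat the first parent, i.e. trunk, as the mainline state to return to:

```shell
# Throwaway demo of reverting a merge commit while keeping trunk as mainline.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email demo@example.com && git config user.name demo
git checkout -qb trunk
echo base > base.txt && git add base.txt && git commit -qm "base"
git checkout -qb feature
echo change > feature.txt && git add feature.txt && git commit -qm "feature work"
git checkout -q trunk
git merge -q --no-ff -m "merge feature" feature
# -m 1 picks the first parent (trunk) as the state to revert to,
# backing out everything the merged branch brought in:
git revert --no-edit -m 1 HEAD
ls    # base.txt only; feature.txt is gone from the working tree
```

The commits that were only on the merged branch survive in history after the revert, so they can be cherry-picked back onto trunk individually, which matches the recovery path described here.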
Hi All,
[ Thanks to Arpit for working offline and verifying that branch is indeed good.]
I want to summarize what I know of this issue and also solicit other points of
view.
We reverted the commit(c163d1797) from the branch, as soon as we noticed it.
That is, we have made no other commits afte
July 6, 2018 at 10:59 AM
> To: Vinod Kumar Vavilapalli
> Cc: Anu Engineer , Arpit Agarwal <
> aagar...@hortonworks.com>, "su...@apache.org" , "
> yarn-...@hadoop.apache.org" , "
> hdfs-dev@hadoop.apache.org" , &qu
+1, It will allow many users to get a first look at Ozone/HDDS.
Thanks
Anu
On 8/6/18, 10:34 AM, "Elek, Marton" wrote:
Hi All,
I would like to discuss creating an Alpha release for Ozone. The core
functionality of Ozone is complete but there are two missing features;
Se
e refer to HDFS-10285 for discussion details.
This has been a long effort and we're grateful for the support we've
received from the community. In particular, thanks to Andrew Wang, Anoop
Sam John, Anu Engineer, Chris Douglas, Daryn Sharp, Du Jingcheng , Ewan
Hig
> Given that there are some Ozone components spread out past the core maven
> modules, is the plan to release a Hadoop Trunk + Ozone tar ball or is more
> work going to go into segregating the Ozone components prior to release?
The official release will be a source tarball, we intend to release a
omes from that.
Thanks
Anu
On 8/8/18, 1:19 PM, "Allen Wittenauer" wrote:
> On Aug 8, 2018, at 12:56 PM, Anu Engineer
wrote:
>
>> Has anyone verified that a Hadoop release doesn't have _any_ of the
extra ozone bits that are sprinkled outside
Just reverted, Thanks for root causing this.
Thanks
Anu
On 8/15/18, 9:37 AM, "Allen Wittenauer"
wrote:
> On Aug 15, 2018, at 4:49 AM, Kitti Nánási
wrote:
>
> Hi All,
>
> We noticed that the checkstyle run by the pre commit job started to show
> false positive
I believe that you need to regenerate the site using the 'hugo' command (hugo is
a site builder). Then commit and push the generated files.
Thanks
Anu
On 9/22/18, 9:56 AM, "俊平堵" wrote:
Martin, thanks for your reply. It works now, but after git changes - I
haven’t seen Apache Hadoop websit
Hi Marton,
+1 (binding)
1. Verified the Signature
2. Verified the Checksums - MD5 and Sha*
3. Build from Sources.
4. Ran all RPC and REST commands against the cluster via Robot.
5. Tested the OzoneFS functionality
Thank you very much for creating the first release of Ozone.
--Anu
On 9/19/18,
http://blog.rodeo.io/2016/01/24/kudu-as-a-more-flexible-kafka.html?v2
—Anu
I actively work on two branches (Diskbalancer and ozone) and I agree with most
of what Sangjin said.
There is an overhead in working with branches; there are both technical costs
and administrative issues
which discourage developers from using branches.
I think the biggest issue with branch b
AM, "Colin McCabe" wrote:
>On Sun, Jun 12, 2016, at 05:06, Steve Loughran wrote:
>> > On 10 Jun 2016, at 20:37, Anu Engineer wrote:
>> >
>> > I actively work on two branches (Diskbalancer and ozone) and I agree with
>> > most of what Sangjin sai
Hi All,
I would like to propose a merge vote for HDFS-1312 (Disk balancer) branch to
trunk. This branch creates a new tool that allows balancing of data on a
datanode.
The voting commences now and will run for 7 days till Jun/22/2016 5:00 PM PST.
This tool distributes data evenly between the
ras but they need not hold up the merge. The
>>> documentation looks great.
>>>
>>> +1 for merging with HDFS-10557 fixed.
>>>
>>>
>>> On 6/15/16, 5:38 PM, "Anu Engineer" wrote:
>>>
>>> Hi All,
>>>
>>&g
+1, Thanks for the effort. It brings in a world of consistency to the hadoop
vars; and as usual reading your bash code was very educative.
I had a minor suggestion though. Since we have classified the _OPTS into client
and daemon opts, for new people it is hard to know which of these subcommands
rience.
Thanks
Anu
On 9/9/16, 3:06 PM, "Allen Wittenauer" wrote:
>
>> On Sep 9, 2016, at 2:15 PM, Anu Engineer wrote:
>>
>> +1, Thanks for the effort. It brings in a world of consistency to the hadoop
>> vars; and as usual reading your bash code was very e
Hi Andrew,
Thank you for all the hard work. I am really excited to see us making progress
towards a 3.0 release.
+1 (Non-Binding)
1. Deployed the downloaded bits on 4 node cluster with 1 Namenode and 3
datanodes.
2. Verified all normal HDFS operations like create directory, create file,
del
Hi Margus,
Thanks for reporting this issue, Chen has just fixed it. I will commit this
change as soon as we get a Jenkins run.
https://issues.apache.org/jira/browse/HDFS-11386
if you can watch that JIRA and pull when it is committed, it will make it
easier for you.
Thanks
Anu
On 2/2/17, 1:
Hi Allen,
https://issues.apache.org/jira/browse/INFRA-13902
That happened with ozone branch too. It was an inadvertent force push. Infra
has advised us to force push the latest branch if you have it.
Thanks
Anu
On 4/17/17, 7:10 AM, "Allen Wittenauer" wrote:
>Looks like someone reset HEAD b
Hi All,
Looks like we are having failures in the Jenkins pipeline. Would someone with
access to build machines be able to take a look? I am not able to see
human-readable build logs from builds.apache.org.
I can see a message saying builds have been broken since build #19584.
Thanks in advance
Anu
Scratch that, it looks like Jenkins is just really slow in picking up the
patches. Failures are all normal.
Thanks
Anu
On 6/1/17, 10:04 AM, "Anu Engineer" wrote:
>Hi All,
>
>Looks like we are having failures in the Jenkins pipeline. Would someone with
>access to
Hi Erik,
Looking forward to the release of this tool. Thank you very much for the
contribution.
Had a couple of questions about how the tool works.
1. Would you be able to provide the traces along with this tool? In other
words, would I be able to use this out of the box, or do I have to build
Hi All,
I just deployed a test cluster with Nandakumar and we were able to run corona
from a single node, with 10 threads for 12 mins.
We were able to write 789 MB and were writing 66 keys per second from a single
node.
***
Number of Volumes creat
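A quick sanity check on those numbers (my arithmetic, not from the original run): 66 keys/s sustained for 12 minutes works out to about 47,520 keys, and spreading the 789 MB written across them gives the implied average key size:

```shell
# Back-of-the-envelope arithmetic for the corona run reported above.
total_keys=$((66 * 12 * 60))          # 66 keys/s for 12 minutes
avg_kb=$((789 * 1024 / total_keys))   # 789 MB spread across those keys
echo "keys=${total_keys} avg_size=${avg_kb}KB"   # keys=47520 avg_size=17KB
```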
Sorry, it was meant for the wrong alias. My apologies.
—Anu
On 7/20/17, 2:40 PM, "Anu Engineer" wrote:
>Hi All,
>
>I just deployed a test cluster with Nandakumar and we were able to run corona
>from a single node, with 10 threads for 12 mins.
>
>We were able to w
+1. (Binding)
Thanks for getting this release done. Verified the signatures and S3 Gateway.
--Anu
On 11/16/18, 5:15 AM, "Shashikant Banerjee" wrote:
+1 (non-binding).
- Verified signatures
- Verified checksums
- Checked LICENSE/NOTICE files
- Built from source
Hi Daryn,
I have just started reading the patch. Hence my apologies if my question has a
response somewhere hidden in the patch.
Are you concerned that FSEditLock is taken in GlobalStateIdContext on Server
side, and worried that a malicious or stupid client would
cause this lock to be held up
Please see the mail below. This mail is to vote for moving the Apache Ozone
website from
https://git-wip-us.apache.org/repos/asf/hadoop-ozonesite.git
to a new location on gitbox.
I am not doing a separate discussion thread since Apache INFRA is not giving us
any choices.
The only question is
+1
--Anu
On 12/10/18, 6:38 PM, "Vinayakumar B" wrote:
+1
-Vinay
On Mon, 10 Dec 2018, 1:22 pm Elek, Marton
> Thanks Akira,
>
> +1 (non-binding)
>
> I think it's better to do it now at a planned date.
>
> If I understood well the only bigger task
Hi All,
I would like to propose a merge of HDDS-4 branch to the Hadoop trunk.
HDDS-4 branch implements the security work for HDDS and Ozone.
HDDS-4 branch contains the following features:
- Hadoop Kerberos and Tokens support
- A Certificate infrastructure used by Ozone and HDDS.
- Aud
Since I have not heard any concerns, I will start a VOTE thread now.
This vote will run for 7 days and will end on Jan/18/2019 @ 8:00 AM PST.
I will start with my vote, +1 (Binding)
Thanks
Anu
-- Forwarded message -
From: Anu Engineer
Date: Mon, Jan 7, 2019 at 5:10 PM
Subject
+1, (Binding)
Deployed a pseudo-distributed cluster.
Tried out HDFS commands and verified everything works.
--Anu
On 1/14/19, 11:26 AM, "Virajith Jalaparti" wrote:
Thanks Sunil and others who have worked on the making this release happen!
+1 (non-binding)
- Built
With twelve +1 votes (9 Binding and 3 Non-Binding) and no -1 or 0, this
vote passes. Thank you all for voting. We will merge HDDS-4 branch to Ozone
soon.
Thanks
Anu
On Fri, Jan 11, 2019 at 7:40 AM Anu Engineer wrote:
> Since I have not heard any concerns, I will start a VOTE thread
Marton, please correct me if I am wrong, but I believe that without this branch it
is hard for us to push to Apache DockerHub. This allows for Apache account
integration with DockerHub.
Does YARN publish to the Docker Hub via Apache account?
Thanks
Anu
On 1/29/19, 4:54 PM, "Eric Yang" wrote:
>> I propose to adopt Ozone model: which is the same master branch, different
>> release cycle, and different release branch. It is a great example to show
>> agile release we can do (2 Ozone releases after Oct 2018) with less
>> overhead to setup CI, projects, etc.
I second this, especially this
+1
--Anu
On 2/1/19, 3:02 PM, "Jonathan Hung" wrote:
+1. Thanks Wangda.
Jonathan Hung
On Fri, Feb 1, 2019 at 2:25 PM Dinesh Chitlangia <
dchitlan...@hortonworks.com> wrote:
> +1 (non binding), thanks Wangda for organizing this.
>
> Regards,
> D
Konstantin,
Just a nitpicky thought: if we move this branch to Java 8 on Jenkins, but still
hope to release code that can run on Java 7, how will we detect
Java 8-only changes? I am asking because till now, whenever I checked in Java 8
features in branch-2, Jenkins would catch that issue.
With t
+1, on AssertJ usage, thanks for getting this done.
--Anu
On 3/31/19, 9:37 PM, "Akira Ajisaka" wrote:
Hi folks,
Now I'm going to upgrade the JUnit version from 4 to 5 for Java 11 support.
I wanted to start with the small module, so I uploaded a patch to upgrade
the API in h
+1 (Binding)
-- Verified the checksums.
-- Built from sources.
-- Sniff tested the functionality.
--Anu
On Mon, Apr 15, 2019 at 4:09 PM Ajay Kumar
wrote:
> Hi all,
>
> We have created the second release candidate (RC1) for Apache Hadoop Ozone
> 0.4.0-alpha.
>
> This release contains security
+1 (Binding)
-- Built from sources.
-- Ran smoke tests and verified them.
--Anu
On Sun, May 5, 2019 at 8:05 PM Xiaoyu Yao wrote:
> +1 Binding. Thanks all who contributed to the release.
>
> + Download sources and verify signature.
> + Build from source and ran docker-based ad-hoc security tes
Is it possible to unprotect the branches and not the trunk? Generally, a
force push to trunk indicates a mistake and we have had that in the past.
This is just a suggestion, even if this request is not met, I am still +1.
Thanks
Anu
On Tue, May 14, 2019 at 4:58 AM Takanobu Asanuma
wrote:
> +
For Ozone, we have started using the Wiki itself as the agenda and after
the meeting is over, we convert it into the meeting notes.
Here is an example, the project owner can edit and maintain it, it is like
10 mins work - and allows anyone to add stuff into the agenda too.
https://cwiki.apache.org
Why not create a dashboard like this, and make it world readable. We use
this for tracking all Ozone JIRAs.
https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12332610
On Mon, Jun 24, 2019 at 10:32 AM Chao Sun wrote:
> Thanks Wei-Chiu for sharing this great list! I'll try to hel
+1, for the branch idea. Just FYI, your biggest problem is proving that
Hadoop and the downstream projects work correctly after you upgrade core
components like Protobuf.
So while branching and working on a branch is easy, merging back after you
upgrade some of these core components is insanely har
> >
> >> >> Thanx Vinay for the initiative, Makes sense to add support for
> >> different
> >> >> architectures.
> >> >>
> >> >> +1, for the branch idea.
> >> >> Good Luck!!!
> >> >>
> >> >>
+1
—Anu
> On Sep 17, 2019, at 2:49 AM, Elek, Marton wrote:
>
>
>
> TLDR; I propose to move Ozone related code out from Hadoop trunk and store it
> in a separate *Hadoop* git repository apache/hadoop-ozone.git
>
>
>
>
> When Ozone was adopted as a new Hadoop subproject it was proposed[1]
Hi All,
Just FYI. I have created a new branch to pursue the decommission work for
Ozone data nodes. The branch is called "HDDS-1880-Decom" and the work is
tracked in
https://issues.apache.org/jira/browse/HDDS-1880
Thanks
Anu
During ApacheCon, Las Vegas, I was encouraged to share these meeting notes
in the apache mailing lists. So please forgive me for the weekly spam. I
had presumed that people know of these weekly sync-ups and hence have not been
posting notes to the mailing list. Please take a look at older meeting
notes i
https://cwiki.apache.org/confluence/display/HADOOP/2019-09-30+Meeting+notes
--Anu
https://cwiki.apache.org/confluence/display/HADOOP/2019-10-07+Meeting+notes
-- Anu
+1, Binding.
Verified the KEYS
Built from sources and ran tests:
- General Ozone command line tests
- Applications like MR and YARN.
--Anu
On Sat, Oct 12, 2019 at 10:25 AM Xiaoyu Yao
wrote:
> +1 binding. Verified
> * Verify the signature.
> * Build from source.
> * Deploy docker compose
Anu Engineer created HDDS-2374:
--
Summary: Make Ozone Readme.txt point to the Ozone websites instead
of Hadoop.
Key: HDDS-2374
URL: https://issues.apache.org/jira/browse/HDDS-2374
Project: Hadoop
[
https://issues.apache.org/jira/browse/HDDS-2374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Anu Engineer resolved HDDS-2374.
Fix Version/s: 0.5.0
Resolution: Fixed
Merged to the master branch.
> Make Ozone Readme.txt po
[
https://issues.apache.org/jira/browse/HDDS-426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Anu Engineer resolved HDDS-426.
---
Fix Version/s: 0.5.0
Resolution: Fixed
Looks like HDDS-1551 added Creation Time to bucketInfo
[
https://issues.apache.org/jira/browse/HDDS-426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Anu Engineer reopened HDDS-426:
---
> Add field modificationTime for Volume and Buc
[
https://issues.apache.org/jira/browse/HDDS-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Anu Engineer resolved HDDS-2366.
Fix Version/s: 0.5.0
Resolution: Fixed
Committed to the master branch. [~swagle] Thank you
[
https://issues.apache.org/jira/browse/HDDS-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Anu Engineer resolved HDDS-1847.
Fix Version/s: 0.5.0
Resolution: Fixed
I have committed this patch to the master branch
Anu Engineer created HDDS-2404:
--
Summary: Add support for Registered id as service identifier for
CSR.
Key: HDDS-2404
URL: https://issues.apache.org/jira/browse/HDDS-2404
Project: Hadoop Distributed
Anu Engineer created HDDS-2417:
--
Summary: Add the list trash command to the client side
Key: HDDS-2417
URL: https://issues.apache.org/jira/browse/HDDS-2417
Project: Hadoop Distributed Data Store
Anu Engineer created HDDS-2418:
--
Summary: Add the list trash command server side handling.
Key: HDDS-2418
URL: https://issues.apache.org/jira/browse/HDDS-2418
Project: Hadoop Distributed Data Store
Anu Engineer created HDDS-2419:
--
Summary: Add the core logic to process list trash command.
Key: HDDS-2419
URL: https://issues.apache.org/jira/browse/HDDS-2419
Project: Hadoop Distributed Data Store
Anu Engineer created HDDS-2420:
--
Summary: Add the Ozone shell support for list-trash command.
Key: HDDS-2420
URL: https://issues.apache.org/jira/browse/HDDS-2420
Project: Hadoop Distributed Data Store
Anu Engineer created HDDS-2421:
--
Summary: Add documentation for list trash command.
Key: HDDS-2421
URL: https://issues.apache.org/jira/browse/HDDS-2421
Project: Hadoop Distributed Data Store
Anu Engineer created HDDS-2422:
--
Summary: Add robot tests for list-trash command.
Key: HDDS-2422
URL: https://issues.apache.org/jira/browse/HDDS-2422
Project: Hadoop Distributed Data Store
Anu Engineer created HDDS-2423:
--
Summary: Add the recover-trash command client side code
Key: HDDS-2423
URL: https://issues.apache.org/jira/browse/HDDS-2423
Project: Hadoop Distributed Data Store
Anu Engineer created HDDS-2424:
--
Summary: Add the recover-trash command server side handling.
Key: HDDS-2424
URL: https://issues.apache.org/jira/browse/HDDS-2424
Project: Hadoop Distributed Data Store
Anu Engineer created HDDS-2425:
--
Summary: Support the ability to recover-trash to a new bucket.
Key: HDDS-2425
URL: https://issues.apache.org/jira/browse/HDDS-2425
Project: Hadoop Distributed Data Store
Anu Engineer created HDDS-2426:
--
Summary: Support recover-trash to an existing bucket.
Key: HDDS-2426
URL: https://issues.apache.org/jira/browse/HDDS-2426
Project: Hadoop Distributed Data Store
Anu Engineer created HDDS-2428:
--
Summary: Rename a recovered file as .recovered if the file already
exists in the target bucket.
Key: HDDS-2428
URL: https://issues.apache.org/jira/browse/HDDS-2428
Anu Engineer created HDDS-2429:
--
Summary: Recover-trash should warn and skip if the key is a GDPR-ed
key, since recovery is pointless once the encryption keys are lost.
Key: HDDS-2429
URL: https://issues.apache.org
Anu Engineer created HDDS-2430:
--
Summary: Recover-trash should warn and skip if at-rest encryption
is enabled and keys are missing.
Key: HDDS-2430
URL: https://issues.apache.org/jira/browse/HDDS-2430
Anu Engineer created HDDS-2431:
--
Summary: Add recover-trash command to the ozone shell.
Key: HDDS-2431
URL: https://issues.apache.org/jira/browse/HDDS-2431
Project: Hadoop Distributed Data Store