[
https://issues.apache.org/jira/browse/HADOOP-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838382#comment-17838382
]
ASF GitHub Bot commented on HADOOP-18679:
-----------------------------------------
mukund-thakur commented on code in PR #6726:
URL: https://github.com/apache/hadoop/pull/6726#discussion_r1569566843
##########
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/aws_sdk_upgrade.md:
##########
@@ -324,6 +324,7 @@ They have also been updated to return V2 SDK classes.
public interface S3AInternals {
S3Client getAmazonS3V2Client(String reason);
+ S3AStore getStore();
Review Comment:
this is a doc file.
> Add API for bulk/paged object deletion
> --------------------------------------
>
> Key: HADOOP-18679
> URL: https://issues.apache.org/jira/browse/HADOOP-18679
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.5
> Reporter: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> Iceberg and HBase could benefit from being able to give a list of individual
> files to delete - files which may be scattered around the bucket for better
> read performance.
> Add a new optional interface for an object store which allows a caller to
> submit a list of paths of files to delete, where
> the expectation is:
> * if a path is a file: delete
> * if a path is a dir, outcome undefined
> For S3 that'd let us build these into DeleteRequest objects and submit them
> without any probes first.
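The contract quoted above can be sketched as a small Java interface with a toy in-memory implementation standing in for the object store. All names here (`BulkObjectDelete`, `InMemoryStore`, `bulkDelete`) are hypothetical illustrations of the proposal, not the committed Hadoop API, and the string keys stand in for real `Path` objects:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Hypothetical optional contract for bulk object deletion: the caller
 * submits a list of file paths; file paths are deleted without any
 * existence probes first, and directory paths have undefined outcomes.
 */
interface BulkObjectDelete {
    /** Delete the given file paths; returns the paths that were not deleted. */
    List<String> bulkDelete(List<String> paths);
}

/** Toy in-memory store standing in for an object store such as S3. */
class InMemoryStore implements BulkObjectDelete {
    private final Set<String> objects = new HashSet<>();

    void put(String key) { objects.add(key); }

    boolean exists(String key) { return objects.contains(key); }

    @Override
    public List<String> bulkDelete(List<String> paths) {
        // Analogous to batching keys into a single S3 delete request:
        // no per-path HEAD probe, one pass over the submitted list.
        List<String> notDeleted = new ArrayList<>();
        for (String p : paths) {
            if (!objects.remove(p)) {
                notDeleted.add(p); // report keys that were absent
            }
        }
        return notDeleted;
    }
}
```

A caller such as Iceberg or HBase would hand over its scattered file list in one call and inspect the returned list for failures, rather than issuing a probe-then-delete round trip per file.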
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]