
Dongjoon Hyun updated SPARK-47618:
----------------------------------
    Affects Version/s: 4.1.0
                           (was: 4.0.0)

> Use Magic Committer for all S3 buckets by default
> -------------------------------------------------
>
>                 Key: SPARK-47618
>                 URL: https://issues.apache.org/jira/browse/SPARK-47618
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>    Affects Versions: 4.1.0
>            Reporter: Dongjoon Hyun
>            Priority: Major
>              Labels: pull-request-available
>
> This issue aims to use the Apache Hadoop `Magic Committer` for all S3 buckets 
> by default in Apache Spark 4.1.0.
> The Apache Hadoop `Magic Committer` has been usable for S3 buckets to get the 
> best performance since [S3 became fully consistent on December 1st, 
> 2020|https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency/].
> - 
> https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html#ConsistencyModel
> bq. Amazon S3 provides strong read-after-write consistency for PUT and DELETE 
> requests of objects in your Amazon S3 bucket in all AWS Regions. This 
> behavior applies to both writes to new objects as well as PUT requests that 
> overwrite existing objects and DELETE requests. In addition, read operations 
> on Amazon S3 Select, Amazon S3 access control lists (ACLs), Amazon S3 Object 
> Tags, and object metadata (for example, the HEAD object) are strongly 
> consistent.
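> For reference, a minimal sketch of the settings this change would turn on by 
> default; until then they can be set explicitly. It assumes the 
> spark-hadoop-cloud module and the hadoop-aws S3A connector are on the 
> classpath, and the app name and bucket are placeholders:
> {code:scala}
> import org.apache.spark.sql.SparkSession
>
> val spark = SparkSession.builder()
>   .appName("magic-committer-example") // placeholder
>   // Route S3A output through the Magic Committer.
>   .config("spark.hadoop.fs.s3a.committer.name", "magic")
>   .config("spark.hadoop.fs.s3a.committer.magic.enabled", "true")
>   // Bind Spark's commit protocol to Hadoop's PathOutputCommitter factories
>   // (provided by the spark-hadoop-cloud module).
>   .config("spark.sql.sources.commitProtocolClass",
>     "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
>   .config("spark.sql.parquet.output.committer.class",
>     "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
>   .getOrCreate()
>
> // Writes to s3a:// paths now commit via multipart-upload completion
> // instead of the slow, rename-based commit.
> spark.range(10).write.parquet("s3a://my-bucket/example-output") // hypothetical bucket
> {code}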


