Yes, that's true, but this is not (yet) the case for the OpenStack Swift S3
API.
On Tue, Feb 2, 2021 at 21:41, Henoc wrote:
> S3 is strongly consistent now
> https://aws.amazon.com/s3/consistency/
>
> Regards,
> Henoc
>
> On Tue, Feb 2, 2021, 10:27 PM David Morin wrote:
Hi,
I have some issues at the moment with the S3 API of OpenStack Swift (accessed
through S3A). It is eventually consistent, and that causes lots of issues with
my distributed Spark jobs.
Is the S3A committer able to fix that? Or is an "S3Guard-like" implementation
the only way?
David
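
For reference, here is a minimal sketch of how the S3A committers are usually
enabled from PySpark, assuming the spark-hadoop-cloud module is on the
classpath; the Swift endpoint, bucket and output path are placeholders, and
the property names follow the Hadoop S3A committer documentation rather than
anything specific to this setup.

from pyspark.sql import SparkSession

# Sketch only: endpoint/bucket values are placeholders for a Swift S3 setup,
# and the commit protocol classes below come from the spark-hadoop-cloud
# module, which must be on the classpath.
spark = (
    SparkSession.builder
    .appName("s3a-committer-sketch")
    # Use an S3A committer ("directory", "partitioned" or "magic") instead
    # of the classic rename-based FileOutputCommitter.
    .config("spark.hadoop.fs.s3a.committer.name", "directory")
    # Bind Spark's commit protocol to the Hadoop PathOutputCommitter API.
    .config("spark.sql.sources.commitProtocolClass",
            "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
    .config("spark.sql.parquet.output.committer.class",
            "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
    # Point S3A at the Swift-compatible endpoint (placeholder URL).
    .config("spark.hadoop.fs.s3a.endpoint", "https://swift.example.com")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Placeholder write to check that the committer is picked up.
spark.range(10).write.mode("overwrite").parquet("s3a://my-bucket/committer-test/")

The "directory" (staging) committer buffers task output on a local or cluster
filesystem and only completes the multipart uploads at job commit, which is
why it was designed to avoid depending on consistent S3 directory listings;
the "magic" committer historically required a consistent store (S3Guard) on
older Hadoop versions.
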
> Probably we may want to add it to the Structured Streaming guide doc. We
> didn't need it before, since it simply didn't work with the eventually
> consistent model; now it works anyway, but it is very inefficient.
>
>
> On Thu, Dec 24, 2020 at 6:16 AM David Morin wrote:
Does it work with the standard AWS S3 solution and its new consistency model
<https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency/>?
On Wed, Dec 23, 2020 at 18:48, David Morin wrote:
> Thanks.
> My Spark applications run on nodes based on Docker
[...]; nodes should be able to read it immediately
>
> The solutions/workarounds depend on where you are hosting your Spark
> application.
>
>
>
> *From: *David Morin
> *Date: *Wednesday, December 23, 2020 at 11:08 AM
> *To: *"user@spark.apache.org"
> *Subject:
Hello,
I have an issue with my PySpark job related to checkpointing.
Caused by: org.apache.spark.SparkException: Job aborted due to stage
failure: Task 3 in stage 16997.0 failed 4 times, most recent failure: Lost
task 3.3 in stage 16997.0 (TID 206609, 10.XXX, executor 4):
java.lang.IllegalStateExcep
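
For anyone hitting the same checkpoint-related failure, here is a minimal
sketch of the workaround discussed above: pointing the Structured Streaming
checkpoint at a shared, consistent filesystem (HDFS in this example) rather
than at an eventually consistent object store. The streaming source, paths
and job name are placeholders, not taken from the failing job.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("checkpoint-sketch").getOrCreate()

# Placeholder streaming source; any streaming DataFrame would do here.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

query = (
    events.writeStream
    .format("parquet")
    # The checkpoint location must be on storage that the driver and all
    # executors see consistently (HDFS here, placeholder path), not on an
    # eventually consistent S3-compatible endpoint.
    .option("checkpointLocation", "hdfs:///spark/checkpoints/my-job")
    .option("path", "hdfs:///spark/output/my-job")
    .start()
)
query.awaitTermination()
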