Thanks Steve!

FYI: S3 now supports read-after-write (GET-after-PUT) consistency for new
objects in all regions, including US Standard.

https://aws.amazon.com/about-aws/whats-new/2015/08/amazon-s3-introduces-new-usability-enhancements/
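
In case it helps, a minimal sketch of what that means in practice with the
AWS SDK for Java. The client setup, bucket and key are only illustrative,
and as far as I can tell the announcement says US Standard gets the new
behaviour when you talk to the Northern Virginia endpoint:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.S3Object;
    import java.io.File;

    public class ConsistencyCheck {
        public static void main(String[] args) {
            // Point the client at the Northern Virginia endpoint so that
            // new-object PUTs against a US Standard bucket are read-after-write
            // consistent.
            AmazonS3 s3 = new AmazonS3Client();            // default credential chain
            s3.setEndpoint("s3-external-1.amazonaws.com");

            s3.putObject("my-wal-bucket", "wal/segment-000001",
                         new File("segment-000001"));
            // GET of the key we just PUT succeeds straight away: it is a new
            // object, so the consistency announced above applies.
            S3Object back = s3.getObject("my-wal-bucket", "wal/segment-000001");
            System.out.println("read back "
                + back.getObjectMetadata().getContentLength() + " bytes");
        }
    }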

On 23 September 2015 at 13:12, Steve Loughran <ste...@hortonworks.com>
wrote:

>
> On 23 Sep 2015, at 14:56, Michal Čizmazia <mici...@gmail.com> wrote:
>
> To get around the fact that flush does not work in S3, my custom WAL
> implementation stores a separate S3 object for each WriteAheadLog.write
> call.
>
> Do you see any gotchas with this approach?
>
>
>
> nothing obvious.
>
> the blob is PUT in the close() call; once that operation has completed,
> it's in S3. Any attempt to open that file for reading will immediately
> succeed, now even in US-East if you set the right endpoint:
>
> https://forums.aws.amazon.com/ann.jspa?annID=3112
> http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#Regions
>
> If you can avoid listing operations or overwrites, you avoid the fun there.
>
> You do have to bear in mind that the duration of stream.close() is now
> O(bytes) and may fail; a lot of code assumes it is instant and always
> works...
>
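
For the archives, here is a rough sketch of the one-object-per-write idea,
with a small retry around the PUT since, as Steve says, the upload is
O(bytes) and can fail. This is not my actual implementation and it ignores
the details of Spark's WriteAheadLog interface; the class, bucket and key
scheme are only illustrative (AWS SDK for Java):

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.ObjectMetadata;
    import java.io.ByteArrayInputStream;
    import java.nio.ByteBuffer;

    public class S3RecordWriter {
        private final AmazonS3 s3 = new AmazonS3Client();
        private final String bucket = "my-wal-bucket";     // illustrative

        /** Uploads one record as its own S3 object and returns its key. */
        public String write(ByteBuffer record, long time) {
            byte[] bytes = new byte[record.remaining()];
            record.get(bytes);
            // Time-ordered keys; a real scheme would also need a sequence number.
            String key = String.format("wal/%020d", time);
            ObjectMetadata meta = new ObjectMetadata();
            meta.setContentLength(bytes.length);
            // The whole record goes up in a single PUT, so there is no flush
            // or append to rely on; the PUT takes O(bytes) and may fail, so
            // retry a couple of times before giving up.
            for (int attempt = 1; ; attempt++) {
                try {
                    s3.putObject(bucket, key,
                                 new ByteArrayInputStream(bytes), meta);
                    return key;                             // now readable via GET
                } catch (RuntimeException e) {              // AmazonClientException etc.
                    if (attempt == 3) throw e;
                }
            }
        }
    }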
