This feature will definitely help the cases where we saw a FileNotFoundException
after creating a new file via s3a (Spark used to retry the task in
that case).
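For context on why atomic rename keeps coming up in this thread: the Hadoop-based catalogs commit a table change by writing the new metadata file to a temporary location and then renaming it to the versioned name, relying on the filesystem to reject the rename if another writer got there first. The sketch below is a rough, hypothetical illustration of that pattern (not Iceberg's actual code; function and file names are made up), using a local filesystem where `os.link` gives the atomic "create only if absent" guarantee that S3's copy-then-delete rename cannot provide.

```python
import json
import os
import tempfile

def commit_metadata(table_dir, version, metadata):
    """Hypothetical sketch of a rename-based table commit.

    Returns True if this writer won the commit, False if another
    writer already committed the same version.
    """
    # Step 1: write the new metadata to a temp file, never in place.
    fd, tmp_path = tempfile.mkstemp(dir=table_dir, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(metadata, f)

    final_path = os.path.join(table_dir, f"v{version}.metadata.json")
    try:
        # Step 2: atomically publish it. os.link fails with
        # FileExistsError if the destination already exists, which is
        # the same guarantee the Hadoop catalogs need from rename.
        # On S3, "rename" is a non-atomic copy + delete, so two
        # concurrent writers could both appear to succeed.
        os.link(tmp_path, final_path)
        return True
    except FileExistsError:
        return False
    finally:
        os.unlink(tmp_path)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        first = commit_metadata(d, 1, {"schema": "a"})
        second = commit_metadata(d, 1, {"schema": "b"})  # loses the race
        print(first, second)
```

Strong read-after-write consistency removes the stale-listing problem, but it does not add this compare-and-swap-style publish step, which is why the Hadoop catalogs still need a filesystem with atomic rename.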

On Wed, Dec 2, 2020 at 2:11 AM Jungtaek Lim <kabhwan.opensou...@gmail.com>
wrote:

> What about the S3FileIO implementation? I've seen issues filed where even
> the Hive catalog working with S3 runs into unexpected problems, and
> S3FileIO is supposed to fix them (according to Ryan). Is it safe to use the
> Hive catalog + the Hadoop API for S3 now, without S3FileIO?
>
> On Wed, Dec 2, 2020 at 6:54 PM, Vivekanand Vellanki <vi...@dremio.com> wrote:
>
>> Iceberg tables backed by HadoopTables and HadoopCatalog require an atomic
>> rename. This is not yet supported with S3.
>>
>> On Wed, Dec 2, 2020 at 3:20 PM Mass Dosage <massdos...@gmail.com> wrote:
>>
>>> Hello all,
>>>
>>> Yesterday AWS announced that S3 now has strong read-after-write
>>> consistency:
>>>
>>>
>>> https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency
>>>
>>> https://aws.amazon.com/s3/consistency/
>>>
>>> Does this mean that Iceberg tables backed by HadoopTables and
>>> HadoopCatalog can now be used on S3 in addition to HDFS?
>>>
>>> Thanks,
>>>
>>> Adrian
>>>
>>