n the drop table behavior with object storage mode.
-Jack
On Tue, Nov 23, 2021 at 7:30 PM Yan Yan wrote:
> Thank you all for the feedback!
>
> To clarify, the *dropTable* method implementation in the Iceberg library does do
> its work of cleaning up all data + delete files correctly in normal cases.
control, and I'm not sure if (b) is really a huge concern. I would
prefer the current drop table behavior to continue to be the default, and
users may set a new property if they want to remove the entire directory
when dropping a table, so as not to alter the library's behavior. Then we may invent
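To make the idea above concrete, here is a minimal sketch of what such an opt-in could look like. The property name "drop.purge-location" is invented for this sketch and is not an existing Iceberg API; the catalog drop and the recursive Hadoop FileSystem delete are real calls.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.TableIdentifier;

public class DropWithLocationPurge {
  // Hypothetical property name, invented for this sketch; not an Iceberg API.
  private static final String PURGE_LOCATION_PROP = "drop.purge-location";

  public static void drop(Catalog catalog, TableIdentifier id, Configuration conf)
      throws Exception {
    Table table = catalog.loadTable(id);
    String location = table.location();
    boolean purgeLocation = Boolean.parseBoolean(
        table.properties().getOrDefault(PURGE_LOCATION_PROP, "false"));

    // Default behavior stays as-is: drop the catalog pointer and the files
    // reachable from the current metadata.
    catalog.dropTable(id, true /* purge */);

    if (purgeLocation) {
      // Opt-in: recursively remove the whole table directory, including
      // old metadata.json versions and any orphaned files under it.
      Path root = new Path(location);
      FileSystem fs = root.getFileSystem(conf);
      fs.delete(root, true /* recursive */);
    }
  }
}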
Piotr made a good point. The major use case for customizing data file paths is
s3 path randomization to work around the throttling issue. It looks like an
exceptional use case. I’d also prefer to think of it that way: what if the
s3 throttling issue is resolved, or mitigated in a way that users can ignore
it?
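For reference, the randomization case above is a short sketch away: enabling Iceberg's object storage layout on an existing table, which injects a hash into data file paths to spread S3 load. The table identifier and data location below are placeholders, and WRITE_DATA_LOCATION is shown only to illustrate that data can live outside the table root.

import org.apache.iceberg.Table;
import org.apache.iceberg.TableProperties;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.TableIdentifier;

public class EnableObjectStorageLayout {
  public static void enable(Catalog catalog) {
    Table table = catalog.loadTable(TableIdentifier.of("db", "table1"));
    table.updateProperties()
        // Switches to the object storage location provider, which adds a
        // hash component to data file paths to avoid S3 prefix throttling.
        .set(TableProperties.OBJECT_STORE_ENABLED, "true")
        // Data may also be written outside the table root entirely, which
        // is why deleting "the table directory" cannot be assumed safe.
        .set(TableProperties.WRITE_DATA_LOCATION, "s3://some-bucket/data")
        .commit();
  }
}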
Hi,
When you come from a storage perspective, the current design of 'not
owning' the location makes sense.
However, if you come from a SQL perspective, all of this is an impractical
limitation. Analysts and other SQL users want to be able to delete their
data and must have confidence that all the data is actually removed.
+1 for item 1; the fact that we do not remove all data referenced by all
metadata files seems to me like a bug that should be fixed. The table's
pointer is already removed from the catalog with no way to roll back, so there
is no reason to keep those files around. I don't know if there is any
historical reason for this.
Hi everyone,
Does anyone know why, across catalog implementations, when we drop tables with
*purge=true*, we only drop the last metadata file and the files referred to by
it, but not any of the previous metadata? e.g.
*create iceberg table1*; <--- metadata.json-1
*insert into table1* ...; <--- metadata.json-2
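For reference, a minimal sketch of the drop call being asked about, using the Catalog API (the database and table names are placeholders):

import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.TableIdentifier;

public class DropTableExample {
  public static void drop(Catalog catalog) {
    TableIdentifier id = TableIdentifier.of("db", "table1");
    // purge=true: the catalog reads the *current* metadata.json, deletes the
    // data/delete files, manifests, and manifest lists reachable from it,
    // then deletes that metadata file and the catalog pointer. Files only
    // reachable from metadata.json-1 are left behind.
    catalog.dropTable(id, true /* purge */);
  }
}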