Instead of exploring possible operations ourselves, I think we should
follow the SQL standard.

Most of these do. We should make conscious decisions with the standard in
mind for the SQL API. But we also have the Scala API (and versions of it in
other languages) and need to consider how these operations are invoked from
there.

There is something more we need to take care of, like ALTER TABLE.

Yes, I was excluding simple commands like this one to focus on the
commands that may need to make behavior guarantees. I think those
commands are related to the 5 concerns I listed.

Another way to think about this: ALTER TABLE isn’t something you could
reasonably combine with a write as a single atomic operation.

ReplaceTable, RTAS:
Most mainstream databases don’t support these two. I think a
drop-all-data operation is dangerous, and we should only allow users to
do it with DROP TABLE.

I don’t think it would be too confusing for users to have REPLACE TABLE
commands. The fact that the old table is dropped is clear.

There’s a good use case for this. We have analysts who produce a report
table every day. They can overwrite the entire table with new data each
day, but they prefer to drop the previous table and create a new one
with CTAS because they don’t want to worry about schema evolution. They
make no guarantees about the table schema, so they use an operation,
CTAS, that doesn’t constrain their work. There’s no need to alter a
table that is getting replaced.
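
Concretely, the daily workflow looks something like this (the table and
column names here are made up for illustration):

    -- not atomic: the report table is missing between the statements
    DROP TABLE report;

    CREATE TABLE report AS
    SELECT day, metric, count(1) AS total
    FROM events
    WHERE day = current_date()
    GROUP BY day, metric;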

Given a reasonable use case for dropping and recreating a table with CTAS,
I think there’s a good argument for an atomic REPLACE TABLE AS SELECT
operation. My users don’t want to drop the previous report table until the
new one is ready, and they never want a period of time when report data is
unavailable.
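
A sketch of what an atomic version could look like, assuming a REPLACE
TABLE ... AS SELECT syntax (the exact syntax isn’t settled; this is
just illustrative):

    -- atomic: the new table replaces the old one in a single commit,
    -- so readers never see a missing or empty report table
    REPLACE TABLE report AS
    SELECT day, metric, count(1) AS total
    FROM events
    WHERE day = current_date()
    GROUP BY day, metric;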

This is why I think it is a good idea to consider each of these. Just
because it didn’t make sense in another DB to support RTAS doesn’t mean
there isn’t a reason to do it now.

As for REPLACE TABLE that isn’t RTAS, I don’t think there’s a good use case
because it is unlikely that we need an atomic operation. Not much goes
wrong in a table create, and we don’t want to confuse users, who should
generally use ALTER TABLE for schema evolution.

DeleteFrom, ReplaceData:
These two are in the SQL standard, but in a more general form. DELETE,
UPDATE, and MERGE are the most common SQL statements for changing data.

My point is that we should care what users are trying to do and whether we
should support a combined atomic operation.
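
MERGE is a good example of a combined atomic operation from the
standard: one statement that can update, insert, and (in many dialects)
delete rows in a single commit. Roughly, with hypothetical table names,
since syntax varies by database:

    MERGE INTO summary s
    USING updates u
      ON s.id = u.id
    WHEN MATCHED THEN
      UPDATE SET total = u.total
    WHEN NOT MATCHED THEN
      INSERT (id, total) VALUES (u.id, u.total);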

Replacing data is an operation that I see our data engineers using all the
time. Spark already supports INSERT OVERWRITE ... PARTITION that replaces
all data in a partition. We use this to continuously update summary tables
as data arrives. Each hour of fact data gets added to the daily summary
rollup and replaces the last summary written. Clearly, this should be an
atomic operation, and it currently is.
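
For example, with a daily summary table partitioned by day (the table
and column names are hypothetical):

    -- replaces all data in the named partition atomically; the rest
    -- of the table is untouched
    INSERT OVERWRITE TABLE daily_summary
    PARTITION (day = '2018-07-20')
    SELECT metric, count(1) AS total
    FROM fact_events
    WHERE day = '2018-07-20'
    GROUP BY metric;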

The question for v2 is: how do we perform this same operation with the v2
API?

A transaction made from a delete and an insert would work. Is this what
we want to use? How do we add this to v2?
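
Conceptually, it is something like the pseudo-SQL below, using the same
hypothetical summary table as above. Spark has no BEGIN/COMMIT, so the
v2 API would need to supply this atomicity some other way:

    BEGIN;  -- pseudo-SQL: not valid Spark syntax
      DELETE FROM daily_summary
      WHERE day = '2018-07-20';

      INSERT INTO daily_summary PARTITION (day = '2018-07-20')
      SELECT metric, count(1) AS total
      FROM fact_events
      WHERE day = '2018-07-20'
      GROUP BY metric;
    COMMIT;  -- both changes become visible together, or neither does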

rb
-- 
Ryan Blue
Software Engineer
Netflix
