Hi, thanks for the reply.
Let’s imagine we have a Parquet-based table called parquet_table, and now I want to
insert it into a new JDBC table, all using pure SQL.
If the JDBC table already exists, it’s easy: we do CREATE TABLE ... USING JDBC and
then we do INSERT INTO that table.
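For the existing-table case, a sketch of what I mean (the connection URL, table names, and credentials below are made-up placeholders):

```sql
-- Register the already-existing remote table in Spark as a JDBC-backed table.
CREATE TABLE jdbc_table
USING org.apache.spark.sql.jdbc
OPTIONS (
  url 'jdbc:postgresql://dbhost:5432/bi',
  dbtable 'public.jdbc_table',
  user 'bi_user',
  password '***'
);

-- Then the insert works like with any other Spark table.
INSERT INTO jdbc_table SELECT * FROM parquet_table;
```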
If the table doesn’t exist, though, I’m stuck.
Generally, the problem is that I don’t find a way to automatically create a
JDBC table in the JDBC database when I want to insert data into it using Spark
SQL only, not DataFrames API.
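Concretely, if I’ve understood the behaviour correctly, CREATE TABLE ... USING jdbc only registers a mapping to a remote table; it doesn’t create that table in the target database, so something like this fails on the insert (names are hypothetical):

```sql
-- public.new_table does not exist yet on the JDBC side.
-- Spark happily registers the mapping...
CREATE TABLE new_jdbc_table
USING org.apache.spark.sql.jdbc
OPTIONS (
  url 'jdbc:postgresql://dbhost:5432/bi',
  dbtable 'public.new_table'
);

-- ...but this fails with a "relation does not exist"-style error
-- from the JDBC driver, because nothing created the remote table.
INSERT INTO new_jdbc_table SELECT * FROM parquet_table;
```

The DataFrames API writer can create the target table automatically, but that’s exactly what I’m trying to avoid here.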
> On 2 Feb 2023, at 21:22, Harut Martirosyan wrote:
Please bear in mind that INSERT/UPDATE/DELETE operations are DML,
whereas CREATE/DROP TABLE are DDL operations that are best performed in the
native database, which I presume is transactional.
Can you CREATE TABLE first, before any insert of data, using the native JDBC
database’s own syntax?
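For example, assuming the target is PostgreSQL and you know the column types (the table and columns below are hypothetical), something like:

```sql
-- Run this against the JDBC database itself (e.g. via psql), not Spark:
-- explicit, transaction-safe DDL creates the target table up front.
CREATE TABLE public.jdbc_table (
  id      BIGINT PRIMARY KEY,
  metric  DOUBLE PRECISION,
  ts      TIMESTAMP
);
```

Once the table exists natively, the Spark-side CREATE TABLE ... USING JDBC plus INSERT INTO route from your first case should work.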
Alternatively y
Thank you very much.
I understand the performance implications and that Spark will download the data
before modifying it.
The JDBC database is just extremely small, it’s the BI/aggregated layer.
What’s interesting is that the docs here say I can use JDBC:
https://spark.apache.org/docs/3.3.1/sql-ref-syntax-dm