Well, the challenge is that Spark is best suited to inserting a DataFrame into
the Oracle table, i.e. a bulk insert, whereas
INSERT INTO table (column list) VALUES (..) is a single-record insert.
Can you try creating a staging table in Oracle without the get_function()
column and do a bulk insert from Spark?
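A rough sketch of what I mean (the JDBC URL, credentials, file path, and
table names below are placeholders, and I am assuming the columns a, b, c
and the function get_function_value from your example):

import oracledb  # python-oracledb, assumed to be installed
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("/path/to/input.csv", header=True)  # your file ingest

# Step 1: bulk-insert the DataFrame into a staging table that has no
# function-derived column -- the fast, set-based path Spark is good at.
df.write. \
    format("jdbc"). \
    option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"). \
    option("dbtable", "staging_table"). \
    option("user", "scott"). \
    option("password", "tiger"). \
    option("driver", "oracle.jdbc.OracleDriver"). \
    mode("append"). \
    save()

# Step 2: let Oracle apply the function while copying staging -> target,
# as one set-based INSERT ... SELECT rather than row-by-row inserts.
conn = oracledb.connect(user="scott", password="tiger",
                        dsn="dbhost:1521/ORCLPDB1")
cur = conn.cursor()
cur.execute("INSERT INTO target_table (a, b, c) "
            "SELECT a, get_function_value(b), c FROM staging_table")
conn.commit()
conn.close()

The INSERT ... SELECT runs entirely inside Oracle, so the function is
evaluated once per row without any network round trips.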
Hi Mich,
Thanks for your reply. Please advise whether the insert query that I need to
substitute should look like the one below:

INSERT INTO table (a, b, c) VALUES (?, get_function_value(?), ?)

In the statement above:
? : refers to a value taken from the corresponding DataFrame column
get_function_value : refers to the function which is written in the database
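If it helps, this is roughly how I pictured executing a statement of that
shape from Spark (only a sketch; I am assuming the python-oracledb package
is installed on every executor, and the connection details and table name
are placeholders):

import oracledb  # assumed installed on every executor

def insert_partition(rows):
    data = [(r["a"], r["b"], r["c"]) for r in rows]
    if not data:
        return
    # One connection per partition; Oracle evaluates get_function_value
    # for each row as it is inserted.
    conn = oracledb.connect(user="scott", password="tiger",
                            dsn="dbhost:1521/ORCLPDB1")
    cur = conn.cursor()
    cur.executemany(
        "INSERT INTO target_table (a, b, c) "
        "VALUES (:1, get_function_value(:2), :3)", data)
    conn.commit()
    conn.close()

df.foreachPartition(insert_partition)  # df: the DataFrame from my file

Though I suspect batched single-row inserts like this will be slower than a
true bulk write.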
I gather you mean using JDBC to write to the Oracle table?
Spark provides a unified framework for writing to any JDBC-compliant
database, for example:
def writeTableWithJDBC(dataFrame, url, tableName, user, password, driver, mode):
    try:
        dataFrame.write. \
            format("jdbc"). \
            option("url", url). \
            option("dbtable", tableName). \
            option("user", user).option("password", password). \
            option("driver", driver). \
            mode(mode). \
            save()
    except Exception as e:
        print(e)
Hi All,
I am using Spark to ingest data from a file into an Oracle database table. For
one of the fields, the value to be populated is generated by a function
that is written in the database.
The input to the function is one of the fields of the DataFrame.
I wanted to use Spark's JDBC write to perform the operation.
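For reference, this is the kind of plain JDBC write I had in mind (the file
path, URL, and connection properties below are just illustrative); the open
question is how to get the database function applied to one column during
that write:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("file_to_oracle").getOrCreate()
df = spark.read.csv("/path/to/input.csv", header=True)  # illustrative path

# Plain bulk write: every target column comes straight from the DataFrame,
# which is exactly why the function-derived column is the sticking point.
df.write.jdbc(url="jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1",
              table="target_table",
              mode="append",
              properties={"user": "scott", "password": "tiger",
                          "driver": "oracle.jdbc.OracleDriver"})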