Hi,
I haven't checked my answer (too lazy today), but I think I know what
might be going on.
tl;dr: Use cache() to preserve the initial set of rows read from MySQL.
After you append new rows, you will have twice as many rows as you had
previously. Correct?
Since newDF references the table, every time you use it Spark re-evaluates
the plan and re-reads the table from MySQL; after the append, the table
already contains the new rows, so the original rows show up twice. Caching
the DataFrame right after the read (and forcing it with an action) pins the
original rows in memory.
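
A minimal sketch of the fix, assuming the same names as in the snippet
below (jdbcUrl, table, properties):

// Cache the initial JDBC read and force it with an action, so later
// uses of mysqlDF (and anything derived from it, like newDF) serve the
// original rows from the cache instead of re-reading MySQL.
val mysqlDF = spark.read.jdbc(jdbcUrl, table, properties).cache()
mysqlDF.count() // action that materializes the cached rows

Note that cache() is lazy; without the count() (or some other action)
before the append, the first real read could still happen after the table
has grown.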
Sorry, the formatting of my last mail was not good; here is the code again:
println("Going to talk to mySql")
// Read table from mySQL.
val mysqlDF = spark.read.jdbc(jdbcUrl, table, properties)
println("I am back from mySql")
mysqlDF.show()
// Create a new DataFrame with column 'id' increased to avoid duplicate
// primary keys (idOffset is a placeholder for whatever shift you use;
// needs: import org.apache.spark.sql.functions.{col, lit})
val newDF = mysqlDF.withColumn("id", col("id") + lit(idOffset))
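
For context, the append step that triggers the doubling presumably looks
something like this (mode("append") and writing back to the same table are
assumptions on my part):

// Write the shifted rows back into the same MySQL table. Without the
// cache() suggested above, any later action on mysqlDF or newDF
// re-reads the table, which by now also contains these appended rows.
newDF.write.mode("append").jdbc(jdbcUrl, table, properties)

With mysqlDF cached and materialized before this write, a later
mysqlDF.show() keeps returning the original rows rather than twice as many.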