I did what you suggested and was finally able to update the schema. But you're
right, it's very dirty: I had to modify almost all the scripts.
The problem with the scripts comes from already having a previous schema in
that version: many of the tables or columns the scripts try to add already
exist, which produces many errors. After modifying the scripts, everything
works.
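
Roughly, the kind of guard that makes the steps re-runnable looks like this
(a sketch in Scala through the Spark SQL API; my_db.events and new_col are
made-up names):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("schema-migration")
  .enableHiveSupport()
  .getOrCreate()

// IF NOT EXISTS turns the create into a no-op when the table is
// already there, so re-running the migration does not fail.
spark.sql(
  """CREATE TABLE IF NOT EXISTS my_db.events (
    |  id BIGINT,
    |  payload STRING
    |) STORED AS PARQUET""".stripMargin)

// Hive DDL has no IF NOT EXISTS for ADD COLUMNS, so check the
// table's current columns before altering it.
if (!spark.table("my_db.events").columns.contains("new_col")) {
  spark.sql("ALTER TABLE my_db.events ADD COLUMNS (new_col STRING)")
}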

A cleaner approach would be for Spark to save a Hive table in the
corresponding metastore schema version directly, but I do not know how to do
that.
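
The closest thing I can think of is pointing Spark's Hive client at the
matching metastore version via spark.sql.hive.metastore.version and
spark.sql.hive.metastore.jars, but I have not tested whether that actually
covers this case. A sketch (the version string and table name are
placeholders):

import org.apache.spark.sql.SparkSession

// Untested sketch: have Spark talk to the metastore with a client of
// the warehouse's actual version, then write the table through it.
// "1.2.1" and my_db.sketch_table are placeholders.
val spark = SparkSession.builder()
  .appName("write-via-versioned-metastore-client")
  .config("spark.sql.hive.metastore.version", "1.2.1")
  .config("spark.sql.hive.metastore.jars", "maven") // fetch matching client jars
  .enableHiveSupport()
  .getOrCreate()

spark.range(10).toDF("id").write.saveAsTable("my_db.sketch_table")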



