Hi All,

I have a scenario in Spark (Scala)/Hive:

Day 1:

I have a file with 5 columns which needs to be processed and loaded into
Hive tables.

Day 2:

The same feed (file) arrives with 8 columns (3 additional fields), which
also needs to be processed and loaded into the same Hive tables.

How do we approach this problem without changing the target table schema?
Is there any way we can achieve this?
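For illustration, here is a minimal sketch of the kind of approach I have in
mind: read the feed with a header and keep only the columns the target table
already defines, so any extra fields are dropped before the insert. The table
name, file path, and header option below are placeholders, not my actual job:

import org.apache.spark.sql.SparkSession

// Sketch only: align an evolving feed to a fixed Hive table schema
// by selecting just the columns the table already has.
val spark = SparkSession.builder()
  .appName("feed-load")
  .enableHiveSupport()
  .getOrCreate()

// Hypothetical target table with the original 5 columns
val targetTable = "mydb.feed_table"
val targetCols = spark.table(targetTable).columns

// Day-2 feed may carry 8 columns; a header lets us select by name
val feed = spark.read
  .option("header", "true")
  .csv("/data/feed/day2/")

// Drop the extra columns so the frame matches the table schema
val aligned = feed.select(targetCols.map(feed.col): _*)

aligned.write.mode("append").insertInto(targetTable)

Selecting by name assumes the feed has a header row; if it does not, I suppose
one could instead read with an explicit schema covering only the first 5
positions. Is this a reasonable direction, or is there a better-supported way?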

Thanks
Anbu


