hourly/EOD jobs? We have still not decided how frequently we would need to do that; it will depend on user requirements. If they need real-time data, we will need to think of an alternative. How are you doing the same for Sybase? How do you sync in real time?
Thank you!!
Regards,
Tapan Upadhyay
+1 973 652
Hi,
We are planning to move our ad-hoc queries from Teradata to Spark. We have a
huge volume of queries during the day. What is the best way to go about it?
1) Read data directly from the Teradata DB using Spark JDBC
2) Import data into Hive tables stored as Parquet via EOD Sqoop jobs, and
then run queries
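For option 1, a minimal sketch of what the Spark JDBC read might look like. The host, table, and credentials below are placeholders, not values from this thread; the helper just builds the option map that `spark.read.format("jdbc")` expects, and the Teradata JDBC driver jar would need to be on the classpath:

```python
# Sketch of option 1: querying Teradata directly via Spark JDBC.
# Host, table, user, and password are hypothetical placeholders.

def teradata_jdbc_options(host, table, user, password, fetch_size=10000):
    """Build the option map for spark.read.format('jdbc')."""
    return {
        "url": f"jdbc:teradata://{host}/CHARSET=UTF8",
        "dbtable": table,
        "user": user,
        "password": password,
        "driver": "com.teradata.jdbc.TeraDriver",
        "fetchsize": str(fetch_size),  # rows fetched per round trip
    }

# Inside a Spark job (driver jar required on the classpath):
# df = (spark.read.format("jdbc")
#           .options(**teradata_jdbc_options("td-host", "sales.orders",
#                                            "user", "pass"))
#           .load())
# df.createOrReplaceTempView("orders")
# spark.sql("SELECT ... FROM orders").show()
```

Note that this pushes every ad-hoc query back onto Teradata at read time, which is why option 2 (a nightly Sqoop import into Parquet-backed Hive tables) is usually preferred when query volume is high.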