To ask a related question: if I use Zookeeper for table locking, will this 
affect all attempts to access the Hive tables (including those from my Spark 
applications), or only those made through the Thriftserver? In other words, 
does Zookeeper provide concurrency control for the Hive metastore in general, 
or only for HiveServer2/Spark's Thriftserver?

Thanks!

From: Tim Schweichler <tim.schweich...@healthination.com>
Date: Monday, December 15, 2014 at 10:56 AM
To: "user@spark.apache.org" <user@spark.apache.org>
Subject: integrating long-running Spark jobs with Thriftserver

Hi everybody,

I apologize if the answer to my question is obvious, but I haven't been able 
to find a straightforward solution anywhere online.

I have a number of Spark jobs written using the Python API that do things like 
load data from Amazon S3 into a main table in the Hive metastore, perform 
intensive calculations on that data to build derived/aggregated tables, and so 
on. I also have Tableau set up to read those tables via the Spark Thriftserver.
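
For concreteness, here's a minimal sketch of what one of these jobs looks like 
(the bucket path, table names, and columns are placeholders, and it assumes the 
HiveContext-based PySpark API):

    # Hypothetical nightly job, submitted via spark-submit: load raw events
    # from S3 into a main Hive table, then rebuild an aggregated table.
    from pyspark import SparkContext
    from pyspark.sql import HiveContext

    sc = SparkContext(appName="nightly_ingest")
    hive = HiveContext(sc)

    # Bucket path and table/column names are made up for illustration.
    raw = hive.jsonFile("s3n://my-bucket/events/2014-12-15/")
    raw.registerTempTable("raw_events")

    # Append the day's data to the main table...
    hive.sql("INSERT INTO TABLE events SELECT * FROM raw_events")

    # ...then rebuild the derived/aggregated table from scratch.
    hive.sql("INSERT OVERWRITE TABLE daily_video_stats "
             "SELECT video_id, COUNT(*) AS views FROM events GROUP BY video_id")

    sc.stop()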

My question is how best to integrate those two sides of Spark. I want the 
Thriftserver running constantly so that Tableau can refresh its extracts on 
schedule and users can query those tables manually as needed, but I also need 
to run those Python jobs on a schedule. What's the best way to do that? The 
options I'm considering are as follows:


  1.  Simply call the Python jobs via spark-submit, scheduled by cron. My 
concern here is concurrency: what happens if Tableau or a user reads from a 
table at the same time that a job is rebuilding/updating it? To my 
understanding, the Thriftserver is designed to handle concurrent queries, but 
Spark in general is not safe when two different Spark contexts access the same 
data (as would be the case with this approach). Am I correct in that thinking, 
or is there actually no problem with this method?
  2.  Call the Python jobs through the Spark Thriftserver so that the same 
Spark context is used. My question here is how to do that. I know one can call 
a Python script as part of a HiveQL query using TRANSFORM (a rough sketch of 
what I mean follows this list), but that seems to be designed for quick, 
per-row calculations on existing data as part of a query rather than for 
building tables in the first place or for long-running jobs that don't return 
anything. (Again, am I correct in this thinking, or would this actually be a 
viable solution?) Is there a different way to call long-running Spark jobs via 
the Thriftserver?
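
For reference, my understanding of TRANSFORM is that it streams the rows of an 
existing table through a script over stdin/stdout, roughly like the sketch 
below (the script, table, and column names are made up):

    # transform_views.py -- hypothetical script that would be invoked from
    # HiveQL roughly as:
    #
    #   ADD FILE transform_views.py;
    #   SELECT TRANSFORM (video_id, views)
    #     USING 'python transform_views.py'
    #     AS (video_id, views_bucket)
    #   FROM daily_video_stats;
    #
    # Hive feeds each input row to the script on stdin as tab-separated
    # fields and reads output rows back from stdout, which is why this
    # seems suited to per-row calculations rather than to kicking off a
    # long-running job that builds tables and returns nothing.
    import sys

    for line in sys.stdin:
        video_id, views = line.strip().split("\t")
        bucket = "high" if int(views) > 1000 else "low"
        sys.stdout.write("%s\t%s\n" % (video_id, bucket))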

Is either of these a good approach, or is there a better way that I'm missing?

Thanks!
