It would appear the simple answer is to use the JDBC Thrift server in Spark.
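
For anyone who finds this thread later, here is a minimal sketch of the
pattern, assuming a Thrift server started with sbin/start-thriftserver.sh
and listening on its default port, 10000. The host, credentials, and the
"events" table below are hypothetical; any JDBC client (including Spark's
bundled bin/beeline) talks to it the same way.

    // Minimal JDBC client for the Spark SQL Thrift server (a sketch).
    // Assumes the Hive JDBC driver (org.apache.hive:hive-jdbc) is on the
    // classpath; it registers itself with DriverManager automatically.
    import java.sql.DriverManager

    object ThriftQuery {
      def main(args: Array[String]): Unit = {
        // Default Thrift server endpoint; "events" is a hypothetical table.
        val conn = DriverManager.getConnection(
          "jdbc:hive2://localhost:10000/default", "user", "")
        try {
          val stmt = conn.createStatement()
          val rs = stmt.executeQuery("SELECT COUNT(*) FROM events")
          while (rs.next()) println(rs.getLong(1))
        } finally {
          conn.close()
        }
      }
    }

Each RESTful endpoint in the web farm could issue queries this way (with a
connection pool in front), while the Thrift server hides whichever store
(HBase, Kudu, Parquet files) sits underneath.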

Thanks,
Ben

> On Oct 6, 2016, at 9:38 PM, Matei Zaharia <matei.zaha...@gmail.com> wrote:
> 
> This is exactly what the Spark SQL Thrift server does, if you just want to 
> access it using JDBC.
> 
> Matei
> 
>> On Oct 6, 2016, at 4:27 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
>> 
>> Has anyone tried to integrate Spark with a server farm of RESTful API 
>> endpoints, or even HTTP web servers for that matter? I know it’s typically 
>> done using a web farm as the presentation interface; data then flows through 
>> a firewall/router that directs calls to a JDBC listener, which will SELECT, 
>> INSERT, UPDATE, and, at times, DELETE data in a database. Can the same be 
>> done using the Spark SQL Thrift server on top of, say, HBase, Kudu, Parquet, 
>> etc.? Or can Kafka be used somewhere? Spark would be an ideal intermediary 
>> because it can talk to any data store underneath, so swapping out a 
>> technology at any time would be possible.
>> 
>> Just want some ideas.
>> 
>> Thanks,
>> Ben 
>> 
>> 
> 

