Has anyone tried to integrate Spark with a server farm of RESTful API endpoints, 
or even HTTP web servers for that matter? Typically this is done with a web farm 
as the presentation tier; requests flow through a firewall/router to a JDBC 
listener that SELECTs, INSERTs, UPDATEs and, at times, DELETEs data in a 
database. Can the same be done with the Spark SQL Thrift Server on top of, say, 
HBase, Kudu, or Parquet? Could Kafka fit in somewhere? Spark would be an ideal 
intermediary because it can talk to any data store underneath, so swapping out a 
technology at any time would be possible.
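
To make the idea concrete, here is a minimal sketch (in Python, just for illustration) of that hop: the web tier builds a HiveServer2-style JDBC URL pointing at the Spark SQL Thrift Server, and the "listener" hands each statement to a pluggable backend callable, which is what would let you swap the underlying store. The function names (`thrift_jdbc_url`, `handle_request`) and the host name are mine, not from any real deployment:

```python
# Sketch: web farm -> firewall/router -> listener -> pluggable backend.
# The Spark SQL Thrift Server speaks the HiveServer2 protocol, so JDBC
# clients reach it with a jdbc:hive2:// URL (default port 10000).

def thrift_jdbc_url(host, port=10000, database="default"):
    """Build the JDBC URL a web-farm client would use to reach
    the Spark SQL Thrift Server."""
    return f"jdbc:hive2://{host}:{port}/{database}"

def handle_request(sql, run_query):
    """Imitate the listener hop: sanity-check the statement verb,
    then hand it to whatever backend callable is plugged in."""
    verb = sql.strip().split()[0].upper()
    if verb not in {"SELECT", "INSERT", "UPDATE", "DELETE"}:
        raise ValueError(f"unsupported statement: {verb}")
    return run_query(sql)

# In production run_query would wrap a real connection, e.g. via PyHive:
#   from pyhive import hive
#   conn = hive.connect(host="thrift-host", port=10000)  # hypothetical host
#   run_query = lambda sql: conn.cursor().execute(sql)
# Here a stub stands in so the sketch is self-contained:
if __name__ == "__main__":
    stub = lambda sql: [("ok",)]
    print(thrift_jdbc_url("thrift-host"))  # jdbc:hive2://thrift-host:10000/default
    print(handle_request("SELECT 1", stub))
```

The point of passing `run_query` in, rather than hard-coding a driver, is exactly the swap-out property described above: the REST tier never knows whether Spark is reading HBase, Kudu, or Parquet underneath.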

Just want some ideas.

Thanks,
Ben 

