Sorry, I guess I hit the send button too soon.

This question is regarding a Spark standalone cluster. My understanding is
that Spark is an execution engine, not a storage layer.
Spark processes data in memory, but when someone refers to a Spark table
created through Spark SQL (DataFrame/RDD), what exactly are they referring to?

Could it be a Hive table? If yes, is it the same Hive metastore that Spark uses?
Is it a table in memory? If yes, how can an external application access this
in-memory table? If via JDBC, which driver?

On a Databricks cluster, could a Spark table created through Spark SQL
(DataFrame/RDD) be a Hive table or a Delta Lake table?
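In other words, as far as I understand, on Databricks the same DataFrame write could target Delta instead. A hypothetical sketch (assumes a Delta-enabled SparkSession; the names are made up):

```python
# Hypothetical Databricks case; assumes the session supports the Delta
# format, and the table/column names are invented for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "a")], ["id", "val"])

# saveAsTable with the delta format would register the table in the
# metastore with Delta Lake files as the storage layer.
df.write.format("delta").mode("overwrite").saveAsTable("my_delta_table")
```

Is that table then a Hive table, a Delta table, or both at once?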

Spark version with Hadoop: spark-2.0.2-bin-hadoop2.7

Thanks, and I appreciate your help!
Ajay.



On Thu, Jul 11, 2019 at 12:19 PM infa elance <infa.ela...@gmail.com> wrote:

> This is a standalone Spark cluster. My understanding is that Spark is an
> execution engine, not a storage layer.
> Spark processes data in memory, but when someone refers to a Spark table
> created through Spark SQL (DataFrame/RDD), what exactly are they referring to?
>
> Could it be a Hive table? If yes, is it the same Hive metastore that Spark
> uses?
> Is it a table in memory? If yes, how can an external app
>
> Spark version with Hadoop: spark-2.0.2-bin-hadoop2.7
>
> Thanks, and I appreciate your help!
> Ajay.
>
