On 10/16/14 10:48 PM, neeraj wrote:
1. I'm trying to use Spark SQL as a data source. Is it possible?
Unfortunately, Spark SQL ODBC/JDBC support is based on the Thrift
server, so at a minimum you need HDFS and a working Hive Metastore
instance (used to persist catalogs) to make things work; see the
connection sketch after the second question below.
2. Please share the link to the ODBC/JDBC drivers at Databricks; I'm not
able to find them.
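Since the Thrift server implements the HiveServer2 protocol, an ordinary
Hive JDBC client can connect to it once the server is up. The following
is only a minimal Scala sketch, not a definitive recipe: it assumes a
Thrift server already listening on localhost:10000, the hive-jdbc driver
on the client classpath, and placeholder host, database, and (empty)
credentials.

import java.sql.DriverManager

object ThriftServerClient {
  def main(args: Array[String]): Unit = {
    // Register the standard Hive JDBC driver (assumed to be on the classpath).
    Class.forName("org.apache.hive.jdbc.HiveDriver")

    // 10000 is the default Thrift server / HiveServer2 port; adjust as needed.
    val conn = DriverManager.getConnection(
      "jdbc:hive2://localhost:10000/default", "", "")
    try {
      val stmt = conn.createStatement()
      val rs = stmt.executeQuery("SHOW TABLES")
      while (rs.next()) {
        println(rs.getString(1))
      }
    } finally {
      conn.close()
    }
  }
}

A JDBC/ODBC-capable BI tool would point at the same jdbc:hive2:// URL.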
On 10/16/14 12:44 PM, neeraj wrote:
I would like to reiterate that I don't have Hive installed on the Hadoop
cluster.
I have some queries on the following comment from Cheng Lian-2:
"The Thrift server is used to interact with existing Hive data, and thus
needs Hive Metastore to access Hive catalog. In your case, you need to
build Spark
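For what it's worth, needing a metastore does not necessarily mean
needing a full Hive installation: with a Spark build that includes Hive
support (e.g. the -Phive Maven profile), a HiveContext can fall back to
an embedded Derby metastore. Below is a minimal Scala sketch under that
assumption; the object name is illustrative.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object EmbeddedMetastoreCheck {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("EmbeddedMetastoreCheck"))

    // With no hive-site.xml on the classpath, HiveContext creates an embedded
    // Derby metastore (a metastore_db directory in the working directory).
    val hiveContext = new HiveContext(sc)

    // Touches the catalog: lists the tables the embedded metastore knows about.
    hiveContext.sql("SHOW TABLES").collect().foreach(println)

    sc.stop()
  }
}

The Thrift server would still have to be started against the same
metastore for external JDBC/ODBC clients to see any tables registered
this way.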
On 10/14/14 7:31 PM, Neeraj Garg02 wrote:
Hi All,
I’ve downloaded and installed Apache Spark 1.1.0 pre-built for Hadoop
2.4.
Now, I want to test two features of Spark:
1. *YARN deployment*: As per my understanding, I need to modify the
“spark-defaults.conf” file with the settings mentioned a
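As a rough sketch of the kind of entries involved (the property names are
standard Spark configuration keys, the values are placeholders, and
HADOOP_CONF_DIR must additionally point at the cluster's Hadoop client
configuration), a YARN-oriented spark-defaults.conf might look like:

# Placeholder values only -- tune for the actual cluster.
spark.master            yarn-client
spark.executor.memory   2g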