Hi,
I have a Hive deployment in my development environment and I want to use it
from Spark. The Spark SQL documentation says the following:
"Users who do not have an existing Hive deployment can still create a
HiveContext. When not configured by the hive-site.xml, the context
automatically creates metastore_db and warehouse in the current directory."

So, given that I already have Hive set up and configured, how can I point
Spark at that existing installation instead of the automatically created
local metastore?
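
For context, here is roughly what I am trying. This is only a minimal
sketch, assuming Spark 1.x and that my hive-site.xml has been copied into
Spark's conf/ directory; the app name "HiveFromSpark" is just a placeholder:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    // With hive-site.xml on the classpath (e.g. in Spark's conf/ directory),
    // HiveContext should connect to the existing metastore instead of
    // creating a local metastore_db in the current directory.
    val sc = new SparkContext(new SparkConf().setAppName("HiveFromSpark"))
    val hiveContext = new HiveContext(sc)

    // Quick sanity check: list the tables the metastore knows about.
    hiveContext.sql("SHOW TABLES").collect().foreach(println)

Is copying hive-site.xml into conf/ the right approach, or is there a
configuration property I should be setting instead?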


