Thanks Michael.  I will give it a try.


On Thu, Feb 12, 2015 at 6:00 PM, Michael Armbrust <mich...@databricks.com>
wrote:

> You can start a JDBC server with an existing context.  See my answer here:
> http://apache-spark-user-list.1001560.n3.nabble.com/Standard-SQL-tool-access-to-SchemaRDD-td20197.html
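>
> A minimal sketch of that approach (assuming Spark 1.2+ with the
> spark-hive and spark-hive-thriftserver modules on the classpath):
>
>   import org.apache.spark.{SparkConf, SparkContext}
>   import org.apache.spark.sql.hive.HiveContext
>   import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2
>
>   val sc = new SparkContext(new SparkConf().setAppName("thrift-with-context"))
>   val hiveContext = new HiveContext(sc)
>
>   // Temp tables registered in this HiveContext are visible to JDBC/ODBC
>   // clients of the thrift server started below.
>   HiveThriftServer2.startWithContext(hiveContext)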
>
> On Thu, Feb 12, 2015 at 7:24 AM, Todd Nist <tsind...@gmail.com> wrote:
>
>> I have a question regarding access to SchemaRDDs and Spark SQL temp
>> tables via the thrift server.  It appears that a SchemaRDD, when created,
>> is only available in the local namespace / context and is therefore
>> unavailable to external services accessing Spark through the thrift
>> server via ODBC; is this correct?  Does the same apply to temp tables?
>>
>> If we process data in Spark, how is it exposed to the thrift server for
>> access by third-party BI applications via ODBC?  Does one need two Spark
>> contexts, one for processing whose results are then dumped into the
>> metastore for a third-party application to fetch, or is it possible to
>> expose the resulting SchemaRDD via the thrift server?
>>
>> I am trying to do this with Tableau and its Spark SQL connector.  From
>> what I can see, I need a Spark context for processing and then have to
>> dump the results into the metastore.  Is it possible to access the
>> resulting SchemaRDD after doing something like this:
>>
>> create temporary table test
>> using org.apache.spark.sql.json
>> options (path '/data/json/*');
>>
>> cache table test;
>>
>> I am using Spark 1.2.1.  If this is not available now, will it be in
>> 1.3.x?  Or is the only way to achieve this to store the data in the
>> metastore, and does that imply Hive?
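>>
>> In Scala terms, this is roughly the shape of what I have in mind (just a
>> sketch; the variable and table names are placeholders):
>>
>>   // Build a SchemaRDD from the JSON files and register it as a temp
>>   // table in the same HiveContext, then cache it.
>>   val events = hiveContext.jsonFile("/data/json/*")
>>   events.registerTempTable("test")
>>   hiveContext.cacheTable("test")
>>
>>   // If the thrift server shares this HiveContext, an ODBC client such
>>   // as Tableau should be able to query the "test" table.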
>>
>> -Todd
>>
>
>
