Great, glad it worked out!
From: Todd Nist
Date: Thursday, February 19, 2015 at 9:19 AM
To: Silvio Fiorito
Cc: "user@spark.apache.org"
Subject: Re: SparkSQL + Tableau Connector
Hi Silvio,
I got this working today using your suggestion with the […]
[…]that is the Hive metastore MySQL database, if you are using
MySQL as the DB for the metastore.
Date: Wed, 11 Feb 2015 19:53:35 -0500
Subject: Re: SparkSQL + Tableau Connector
From: tsind...@gmail.com
To: alee...@hotmail.com
CC: ar...@sigmoidanalytics.com; user@spark.apache.org
First sorry for the long post. So back to tableau and Spark SQL, I'm still
missing something.
TL;DR
To get the Spark SQL temp table associated with the metastore, are there
additional steps required […]
Sorry folks, it is executing Spark jobs instead of Hive jobs. I misread the
logs, since there was other activity going on on the cluster.
From: alee...@hotmail.com
To: ar...@sigmoidanalytics.com; tsind...@gmail.com
CC: user@spark.apache.org
Subject: RE: SparkSQL + Tableau Connector
Date: Wed
[…]into an RDD. Did I
misunderstand the purpose of Spark ThriftServer2?
Date: Wed, 11 Feb 2015 16:07:40 +0530
Subject: Re: SparkSQL + Tableau Connector
From: ar...@sigmoidanalytics.com
To: tsind...@gmail.com
CC: user@spark.apache.org
Hi,
I used this, though it's using an embedded driver and is not a good
approach. It works. You can configure some other metastore type as well; I
have not tried the metastore URIs.
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/opt/bigdata/spark-1.2.0/metastore_db;create=true</value>
</property>
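For comparison, the remote-metastore alternative Arush mentions not having tried would be a hive-site.xml fragment along these lines (a sketch; the host and port are placeholders for an actual Hive metastore service):

```xml
<!-- Hypothetical hive-site.xml fragment: point at a standalone Hive
     metastore service instead of the embedded Derby database. The host
     and port below are placeholders for your own deployment. -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host:9083</value>
</property>
```

Both the spark-shell HiveContext and the Thrift server would then have to point at this same metastore for the tables to line up.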
Hi Arush,
So yes I want to create the tables through Spark SQL. I have placed the
hive-site.xml file inside of the $SPARK_HOME/conf directory; I thought that
was all I should need to do to have the thriftserver use it. Perhaps my
hive-site.xml is wrong; it currently looks like this:
hive.met
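One detail implied by the step Todd describes: the Thrift server only reads $SPARK_HOME/conf/hive-site.xml at startup, so it has to be restarted after the file changes. A sketch, assuming the stock sbin scripts that ship with Spark 1.2 (the master URL is a placeholder):

```shell
# Restart the Spark Thrift server so it picks up conf/hive-site.xml.
# The master URL below is a placeholder for your cluster.
$SPARK_HOME/sbin/stop-thriftserver.sh
$SPARK_HOME/sbin/start-thriftserver.sh --master spark://your-master:7077
```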
BTW what tableau connector are you using?
On Wed, Feb 11, 2015 at 12:55 PM, Arush Kharbanda <
ar...@sigmoidanalytics.com> wrote:
I am a little confused here, why do you want to create the tables in hive.
You want to create the tables in spark-sql, right?
If you are not able to find the same tables through tableau then thrift is
connecting to a different metastore than your spark-shell.
One way to specify a metastore to thrift […]
[…] data using SQL, such as:
>>
>> create temporary table people using org.apache.spark.sql.json options
>> (path 'examples/src/main/resources/people.json')
>> cache table people
>>
>> create temporary table users using org.apache.spark.sql.parquet options
>> (path 'examples/src/main/resources/users.parquet')
>> cache table users
Subject: Re: SparkSQL + Tableau Connector
Hi Silvio,
Ah, I like that. There is a section in Tableau for "Initial SQL" to be executed
upon connecting; this would fit well there. I guess I will need to issue a
collect() and coalesce(1, true).save[…]
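Concretely, the "Initial SQL" box could carry the same statements quoted earlier in the thread, so the temp tables get registered and cached in the very session Tableau opens (a sketch; the path is the Spark example file and the table name is illustrative):

```sql
-- Run by Tableau on connect: register and cache a temp table in this
-- session so that it is visible to Tableau's queries.
create temporary table people
using org.apache.spark.sql.json
options (path 'examples/src/main/resources/people.json');
cache table people;
```

This matters because temporary tables are scoped to the session that creates them; a temp table registered in spark-shell will not show up in a separate Thrift server session.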
Arush,
As for #2 do you mean something like this from the docs:
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resour[…]
[…] options
> (path 'examples/src/main/resources/people.json')
> cache table people
>
> create temporary table users using org.apache.spark.sql.parquet options
> (path 'examples/src/main/resources/users.parquet')
> cache table users
>
> From: Todd Nist
> Date
Arush,
Thank you, I will take a look at that approach in the morning. I sort of
figured the answer to #1 was NO, and that I would need to do 2 and 3; thanks
for clarifying it for me.
-Todd
On Tue, Feb 10, 2015 at 5:24 PM, Arush Kharbanda wrote:
> 1. Can the connector fetch or query schemaRDDs saved to Parquet or JSON files?
1. Can the connector fetch or query schemaRDDs saved to Parquet or JSON
files? NO
2. Do I need to do something to expose these via hive / metastore other
than creating a table in hive? Create a table in spark sql to expose it via
spark sql.
3. Does the thriftserver need to be configured to expose t[…]
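A quick way to sanity-check answers 2 and 3 is to connect Beeline to the Thrift server and list what it actually sees (a sketch; assumes the server is running on its default port 10000 on localhost):

```shell
# Connect the Beeline client that ships with Spark to the Thrift server
# and list the tables visible in that session.
$SPARK_HOME/bin/beeline
beeline> !connect jdbc:hive2://localhost:10000
0: jdbc:hive2://localhost:10000> SHOW TABLES;
```

If a table visible from spark-shell is missing here, the shell and the Thrift server are pointing at different metastores.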
Hi,
I'm trying to understand how and what the Tableau connector to SparkSQL is
able to access. My understanding is it needs to connect to the
thriftserver, and I am not sure how or if it exposes parquet, json, and
schemaRDDs, or whether it only exposes schemas defined in the metastore / hive.
For example […]