Great, glad it worked out!
From: Todd Nist
Date: Thursday, February 19, 2015 at 9:19 AM
To: Silvio Fiorito
Cc: "user@spark.apache.org"
Subject: Re: SparkSQL + Tableau Connector
Hi Silvio,
I got this working today using your suggestion with the "Initial SQL" […]
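As a sketch, the Tableau "Initial SQL" referred to here could contain statements along these lines (the table name and path are hypothetical; the `CREATE TEMPORARY TABLE ... USING` and `CACHE TABLE` syntax is Spark SQL's):

```sql
-- Hypothetical "Initial SQL" run by Tableau on connect: registers a
-- temporary table backed by JSON files so it is queryable for the
-- duration of the session, then caches it for faster dashboards.
CREATE TEMPORARY TABLE people
USING org.apache.spark.sql.json
OPTIONS (path '/data/json/people.json');

CACHE TABLE people;
```

Because the table is temporary, it lives only in the thrift server session that ran the statements, which is why issuing them at connect time fits well.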
[…] that is the Hive metastore MySQL database, if you are using
MySQL as the DB for the metastore.
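For reference, pointing the metastore at MySQL is done in hive-site.xml with the standard Hive JDO properties; a sketch, with a hypothetical host and database name:

```xml
<!-- Hypothetical MySQL-backed metastore configuration for hive-site.xml.
     Host, port, and database name are placeholders. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
```

The MySQL JDBC driver jar must also be on the classpath of whatever process opens the metastore connection.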
Date: Wed, 11 Feb 2015 19:53:35 -0500
Subject: Re: SparkSQL + Tableau Connector
From: tsind...@gmail.com
To: alee...@hotmail.com
CC: ar...@sigmoidanalytics.com; user@spark.apache.org
First, sorry for the long post. So back to Tableau and Spark SQL, I'm still
missing something.
TL;DR
To get the Spark SQL temp table associated with the metastore, are there
additional steps required […]
[…] Spark ThriftServer2 => HiveServer2
>
> It's talking to Tableau Desktop 8.3. Interestingly, when I query a Hive
> table, it still invokes Hive queries to HiveServer2 which is running MR or
> Tez engine. Is this expected?
>
> I thought it should at least use the Catalyst engine […]
Sorry folks, it is executing Spark jobs instead of Hive jobs. I misread the
logs since there were other activities going on on the cluster.
From: alee...@hotmail.com
To: ar...@sigmoidanalytics.com; tsind...@gmail.com
CC: user@spark.apache.org
Subject: RE: SparkSQL + Tableau Connector
Date: Wed […]
[…] into RDD. Did I misunderstand the purpose of Spark ThriftServer2?
Date: Wed, 11 Feb 2015 16:07:40 +0530
Subject: Re: SparkSQL + Tableau Connector
From: ar...@sigmoidanalytics.com
To: tsind...@gmail.com
CC: user@spark.apache.org
Hi
I used this, though it's using an embedded driver and is not a good
approach. It works. You can configure for some other metastore type also. I
have not tried the metastore URIs.
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/opt/bigdata/spark-1.2.0/metastore_db;create=true</value>
</property>
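The metastore URIs that were not tried would replace the embedded-Derby connection above with a pointer to a standalone metastore service; a sketch, with a hypothetical host:

```xml
<!-- Hypothetical hive-site.xml fragment pointing at a remote Hive
     metastore service instead of the embedded Derby database.
     Host and port are placeholders; 9083 is the conventional default. -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host:9083</value>
</property>
```

A shared metastore service avoids the main drawback of the embedded driver: Derby in this mode allows only one process at a time, so the thrift server and spark-shell cannot both open it.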
Hi Arush,
So yes, I want to create the tables through Spark SQL. I have placed the
hive-site.xml file inside of the $SPARK_HOME/conf directory; I thought that
was all I should need to do to have the thriftserver use it. Perhaps my
hive-site.xml is wrong, it currently looks like this:
hive.met[…]
BTW what tableau connector are you using?
On Wed, Feb 11, 2015 at 12:55 PM, Arush Kharbanda <
ar...@sigmoidanalytics.com> wrote:
I am a little confused here: why do you want to create the tables in Hive?
You want to create the tables in spark-sql, right?
If you are not able to find the same tables through Tableau, then thrift is
connecting to a different metastore than your spark-shell.
One way to specify a metastore to the thrift server […]
'examples/src/main/resources/json/*')
> > ;
> Time taken: 0.34 seconds
>
> spark-sql> select * from people;
> NULL  Michael
> 30    Andy
> 19    Justin
> NULL  Michael
> 30    Andy
> 19    Justin
> Time taken: 0.576 seconds
>
> […]
Arush,
As for #2 do you mean something like this from the docs:
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resour[…]
Hi Silvio,
Ah, I like that; there is a section in Tableau for "Initial SQL" to be
executed upon connecting, and this would fit well there. I guess I will need
to issue a collect(), coalesce(1, true).saveAsTextFile(...) or use
repartition(1), as the file currently is being broken into multiple parts.
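The single-file write described above can be sketched as follows (the RDD name and output paths are hypothetical):

```scala
// rdd is an existing RDD[String]. Both variants shuffle the data down
// to a single partition, so saveAsTextFile emits one part file instead
// of one file per partition.
rdd.coalesce(1, shuffle = true).saveAsTextFile("/tmp/people-coalesced")

// Equivalent: repartition(n) is defined as coalesce(n, shuffle = true).
rdd.repartition(1).saveAsTextFile("/tmp/people-repartitioned")
```

Note that collapsing to one partition funnels all the data through a single task, so this only makes sense for output small enough to fit on one node.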
Arush,
Thank you, I will take a look at that approach in the morning. I sort of
figured the answer to #1 was NO and that I would need to do 2 and 3; thanks
for clarifying it for me.
-Todd
On Tue, Feb 10, 2015 at 5:24 PM, Arush Kharbanda wrote:
> 1. Can the connector fetch or query schemaRDD's saved to Parquet or JSON files? […]
Hi Todd,
What you could do is run some Spark SQL commands immediately after the Thrift
server starts up. Or does Tableau have some init SQL commands you could run?
You can actually load data using SQL, such as:
create temporary table people using org.apache.spark.sql.json options (path
'exampl[…]
1. Can the connector fetch or query schemaRDD's saved to Parquet or JSON
files? NO.
2. Do I need to do something to expose these via hive / metastore other
than creating a table in hive? Create a table in spark-sql to expose it via
spark-sql.
3. Does the thriftserver need to be configured to expose t[…]
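Point 2 above can be sketched in Scala: with a HiveContext backed by the same metastore as the thrift server, saving the data as a table makes it visible to Tableau. The path and table name here are hypothetical; the API shown is the Spark 1.2-era SchemaRDD API.

```scala
// sc is an existing SparkContext whose classpath carries the same
// hive-site.xml as the thrift server, so both share one metastore.
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

// Load JSON files as a SchemaRDD, then persist it as a metastore-backed
// table, visible to any thrift server client (e.g. Tableau over ODBC).
val people = sqlContext.jsonFile("/data/json/people.json")
people.saveAsTable("people")
```

Unlike registerTempTable, saveAsTable writes through the metastore, which is what lets a separate process such as the thrift server find the table.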