Cc: u...@spark.incubator.apache.org, alexandria1101 <alexandria.shea...@gmail.com>
Subject: Re: Table not found: using jdbc console to query sparksql hive thriftserver
It sort of depends on the definition of efficiently. From a workflow
perspective I would agree, but from an I/O perspective, wouldn't there be the
same multi-pass from the standpoint of the Hive context needing to push the
data into HDFS? Saying this, if you're pushing the data into HDFS and t…
Thank you!! I can do this using saveAsTable with the SchemaRDD, right?
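For reference, a minimal sketch of that call against the Spark 1.1 API; the
Record class and table name are made up, and it assumes the app's HiveContext
points at the same metastore as the Thrift server:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    case class Record(key: Int, value: String)

    val sc = new SparkContext(new SparkConf().setAppName("save-as-table"))
    val hiveContext = new HiveContext(sc)
    import hiveContext.createSchemaRDD  // implicit RDD -> SchemaRDD conversion

    val records = sc.parallelize(Seq(Record(1, "a"), Record(2, "b")))

    // saveAsTable goes through the HiveContext, so the table is recorded in
    // the Hive metastore rather than only in this process's memory.
    records.saveAsTable("records")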
Hi Denny,
There is a related question, by the way.
I have a program that reads in a stream of RDDs, each of which is to be
loaded into a Hive table as one partition. Currently I do this by first
writing the RDDs to HDFS and then loading them into Hive, which requires
multiple passes of HDFS I/O an…
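For concreteness, the two-pass pattern described above might look like the
sketch below; the table, partition column, and staging path are made-up
placeholders, and it assumes a Spark 1.1 HiveContext:

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.hive.HiveContext

    def loadAsPartition(hiveContext: HiveContext,
                        rdd: RDD[String],
                        batchId: String): Unit = {
      val stagingPath = s"hdfs:///tmp/staging/batch_$batchId"

      // Pass 1: write the RDD out to HDFS.
      rdd.saveAsTextFile(stagingPath)

      // Pass 2: have Hive move the staged files into the table as a new
      // partition (this is the extra round of HDFS I/O being discussed).
      hiveContext.sql(
        s"LOAD DATA INPATH '$stagingPath' " +
        s"INTO TABLE events PARTITION (batch='$batchId')")
    }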
Actually, when registering the table, it is only available within the
SQLContext you are running it in. For Spark 1.1, the method name was changed
to registerTempTable to better reflect that.
The Thrift server runs as a separate process, meaning that it cannot see any
of the temp tables registered in your application.
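A quick sketch of that scoping, written as if in the spark-shell (where sc is
predefined); the class and table names are illustrative:

    import org.apache.spark.sql.SQLContext

    val sqlContext1 = new SQLContext(sc)
    val sqlContext2 = new SQLContext(sc)

    import sqlContext1.createSchemaRDD
    case class Person(name: String)
    val people = sc.parallelize(Seq(Person("alex")))

    // The temp table lives only in sqlContext1's in-memory catalog.
    people.registerTempTable("people")

    sqlContext1.sql("SELECT * FROM people")    // works
    // sqlContext2.sql("SELECT * FROM people") // fails: Table not found
    // The Thrift server is a separate process entirely, so it cannot see
    // "people" either.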
I used the HiveContext to register the tables, and the tables are still not
being found by the Thrift server. Do I have to pass the HiveContext to JDBC
somehow?
You need to run mvn install so that the package you built is put into the
local Maven repo. Then, when compiling your own app (with the right
dependency specified), the package will be retrieved.
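In other words, something like the following; the -DskipTests flag is just a
common time-saver, not required:

    # In the Spark source checkout: build Spark and install its artifacts
    # into the local Maven repo (~/.m2/repository).
    mvn -DskipTests clean install

    # Then build the application; its pom must reference the same version
    # that was just installed.
    mvn package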
On 9/9/14, 8:16 PM, "alexandria1101" wrote:
>I think the package does not exist because I need to c…
I think the package does not exist because I need to change the pom file:

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-assembly_2.10</artifactId>
      <version>1.0.1</version>
      <type>pom</type>
      <scope>provided</scope>
    </dependency>
I changed the version number to 1.1.1, but the build still fails with:
Failure to find org.apache.spark:spark-assembly_2.10:pom:1.1.1 in http…
Thanks so much!
That makes complete sense. However, when I compile I get the error "package
org.apache.spark.sql.hive does not exist."
Has anyone else hit this, and any idea why that might be?
Your tables were registered in the SQLContext, whereas the Thrift server
works with the HiveContext. They seem to be in two different worlds today.
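A sketch of the distinction, again spark-shell style; the table name and
schema are placeholders, and it assumes the application and the Thrift server
read the same hive-site.xml (and therefore the same metastore):

    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.hive.HiveContext

    val sqlContext  = new SQLContext(sc)  // in-memory catalog only
    val hiveContext = new HiveContext(sc) // in-memory catalog + Hive metastore

    // registerTempTable on either context stays inside that one context.
    // To be visible to the Thrift server, a table has to reach the shared
    // metastore, e.g. by creating it through the HiveContext:
    hiveContext.sql("CREATE TABLE IF NOT EXISTS t (key INT, value STRING)")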
On 9/9/14, 5:16 PM, "alexandria1101" wrote:
>Hi,
>
>I want to use the sparksql thrift server in my application and make sure
>everything is loading an…