Even when I comment out those three lines, I still get the same error. Has
anyone solved this?
--
--
Thank you!! I can do this using saveAsTable with the schemaRDD, right?
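For what it's worth, a rough sketch of that approach (sc, events.json and
my_table are made-up names; this assumes the SchemaRDD is built through a
HiveContext, since saveAsTable needs the Hive metastore):

    import org.apache.spark.sql.hive.HiveContext

    val hiveContext = new HiveContext(sc)          // sc: an existing SparkContext
    val rdd = hiveContext.jsonFile("events.json")  // any SchemaRDD works here
    rdd.saveAsTable("my_table")                    // persists to the Hive metastore,
                                                   // so the thrift server can see it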
--
I used the hiveContext to register the tables, but they are still not being
found by the thrift server. Do I have to pass the hiveContext to JDBC
somehow?
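One thing worth noting here: tables registered with registerTempTable live
only inside the HiveContext of the process that registered them, and the
thrift server runs its own HiveContext in a separate JVM, so it can only see
tables persisted to the shared Hive metastore (e.g. via saveAsTable). You
don't pass a hiveContext to JDBC; a JDBC client just talks to the server. A
quick sketch of checking what the server actually sees (assumes the default
port 10000):

    import java.sql.DriverManager

    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000", "", "")
    val rs = conn.createStatement().executeQuery("SHOW TABLES")
    while (rs.next()) println(rs.getString(1))   // only metastore tables show up here
    conn.close()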
--
I think the package does not exist because I need to change the pom file:
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-assembly_2.10</artifactId>
      <version>1.0.1</version>
      <type>pom</type>
      <scope>provided</scope>
    </dependency>
I changed the version number to 1.1.1, but the build still fails with this
error:
Failure to find org.apache.spark:spark-assembly_2.10:pom:1.1.1 in
http
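If it helps: as far as I know the assembly jar is not published to Maven
Central as a regular dependency, which would explain the "Failure to find"
error. The compile error about org.apache.spark.sql.hive usually goes away
by depending on the Hive module directly, something like (version is
whatever Spark release you target):

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.10</artifactId>
      <version>1.1.1</version>
    </dependency>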
--
Thanks so much!
That makes complete sense. However, when I compile I get an error "package
org.apache.spark.sql.hive does not exist."
Has anyone else seen this, and any idea why it might happen?
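For context, that package is the one that provides HiveContext, so the
smallest snippet that reproduces the error is just the import below (sc
stands for an existing SparkContext); it only compiles once the Hive module
is on the classpath:

    import org.apache.spark.sql.hive.HiveContext

    val hiveContext = new HiveContext(sc)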
--
Hi,
I want to use the Spark SQL thrift server in my application and make sure
everything is loading and working. I built Spark 1.1 SNAPSHOT and ran the
thrift server using ./sbin/start-thriftserver.sh. In my application I load
tables into SchemaRDDs, and I expect that the thrift server should pick them
up.
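In case it's useful to anyone following along, a minimal sketch of the setup
described above (the file name people.json and the table name are made up
for illustration):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("thrift-demo"))
    val hiveContext = new HiveContext(sc)

    val people = hiveContext.jsonFile("people.json")   // a SchemaRDD in Spark 1.1
    people.registerTempTable("people")                 // visible only inside this JVM
    hiveContext.sql("SELECT COUNT(*) FROM people").collect().foreach(println)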