bles, it usually means that the HiveContext that you used with JDBC was
different from the one used to create the temp table. However, in your case,
you are using HiveThriftServer2.startWithContext(hiveContext). So, it will be
good to provide more logs and see what happened.
Thanks,
Yin
On Tue, Jan 26, 2016 at 1:33 AM, james.gre...@baesystems.com wrote:
Hi
I posted this on the user list yesterday; I am posting it here now because, on
further investigation, I am fairly sure this is a bug:
On upgrading from 1.5.0 to 1.6.0 I hit a problem with HiveThriftServer2. I
have this code:
val hiveContext = new HiveContext(SparkContext.getOrCreate(conf))
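For context, a minimal sketch of the pattern under discussion, assuming a hypothetical Parquet path and table name for illustration: a temp table is visible over JDBC only if it is registered on the same HiveContext instance that is handed to startWithContext, since temp tables are scoped to a single context.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

val conf = new SparkConf().setAppName("thrift-example")
val hiveContext = new HiveContext(SparkContext.getOrCreate(conf))

// Register the temp table on the SAME context the Thrift server will use;
// temp tables are not visible from any other (Hive)Context instance.
val df = hiveContext.read.parquet("/path/to/data.parquet") // hypothetical path
df.registerTempTable("people")

// Expose that context to JDBC/ODBC clients.
HiveThriftServer2.startWithContext(hiveContext)
```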
/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRelation.scala#L285
Hope it is helpful.
Hao
From: james.gre...@baesystems.com
Sent: Thursday, November 19, 2015 11:14 PM
T
We have written a new Spark DataSource that uses both Parquet and
ElasticSearch. It is based on the existing Parquet DataSource. When I look
at the filters being pushed down to buildScan, I don't see anything representing
filters based on UDFs, or on any fields generated by an explode
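This behavior can be sketched against the Data Sources API, assuming the DataSource implements PrunedFilteredScan (the relation and its members here are hypothetical, shown only to illustrate what reaches buildScan):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row
import org.apache.spark.sql.sources.{BaseRelation, Filter, PrunedFilteredScan}

// Hypothetical relation, to illustrate which predicates arrive in buildScan.
class MyRelation(/* ... */) extends BaseRelation with PrunedFilteredScan {
  override def schema = ???        // the relation's StructType
  override val sqlContext = ???    // supplied by the DataSource

  // Only simple predicates on top-level columns (EqualTo, GreaterThan,
  // In, IsNull, StringStartsWith, ...) can be translated into
  // org.apache.spark.sql.sources.Filter and pushed down. Predicates that
  // involve a UDF, or columns produced by explode(), have no Filter
  // representation, so they never appear in `filters`; Spark evaluates
  // them on the rows returned by the scan instead.
  override def buildScan(requiredColumns: Array[String],
                         filters: Array[Filter]): RDD[Row] = ???
}
```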