RE: spark hivethriftserver problem on 1.5.0 -> 1.6.0 upgrade

2016-02-03 Thread james.gre...@baesystems.com
… it usually means that the HiveContext that you used with JDBC was different from the one used to create the temp table. However, in your case, you are using HiveThriftServer2.startWithContext(hiveContext). So, it will be good to provide more logs and see what happened. Thanks, Yin
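
The gist of the diagnosis: temp tables are registered in the catalog of one particular SQLContext, so a JDBC client can only see them if the Thrift server is serving that same context. A minimal sketch of the two patterns in Spark 1.x Scala (assumes an existing SparkContext `sc`; the table and path names are illustrative, not from the thread):

    import org.apache.spark.sql.hive.HiveContext
    import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

    val hiveContext = new HiveContext(sc)
    hiveContext.read.json("people.json").registerTempTable("people")

    // A Thrift server started separately (e.g. via sbin/start-thriftserver.sh)
    // creates its own HiveContext and cannot see "people".
    // Sharing the context, as done in this thread, makes it visible over JDBC:
    HiveThriftServer2.startWithContext(hiveContext)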

RE: spark hivethriftserver problem on 1.5.0 -> 1.6.0 upgrade

2016-01-27 Thread james.gre...@baesystems.com
… it usually means that the HiveContext that you used with JDBC was different from the one used to create the temp table. However, in your case, you are using HiveThriftServer2.startWithContext(hiveContext). So, it will be good to provide more logs and see what happened. Thanks, Yin On Tue, Jan 26, 2016 at 1:33 AM, james.gre…

spark hivethriftserver problem on 1.5.0 -> 1.6.0 upgrade

2016-01-26 Thread james.gre...@baesystems.com
Hi, I posted this on the user list yesterday; I am posting it here now because on further investigation I am pretty sure this is a bug. On upgrading from 1.5.0 to 1.6.0 I have a problem with HiveThriftServer2. I have this code: val hiveContext = new HiveContext(SparkContext.getOrCreate(conf…
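
The quoted code is cut off mid-expression. A hedged reconstruction of the startup pattern being described, assuming the elided part finishes building the context, registers a temp table, and starts the Thrift server (the DataFrame, path, and table name below are hypothetical):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext
    import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

    val conf = new SparkConf().setAppName("thrift-server-app")
    val hiveContext = new HiveContext(SparkContext.getOrCreate(conf))

    // Hypothetical: register a DataFrame so JDBC clients can query it
    val df = hiveContext.read.parquet("/some/path")
    df.registerTempTable("my_temp_table")

    // Serve the same HiveContext over JDBC/ODBC
    HiveThriftServer2.startWithContext(hiveContext)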

RE: new datasource

2015-11-19 Thread james.gre...@baesystems.com
…/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRelation.scala#L285 Hope it helps. Hao From: james.gre...@baesystems.com Sent: Thursday, November 19, 2015 11:14 PM To: …

new datasource

2015-11-19 Thread james.gre...@baesystems.com
We have written a new Spark DataSource that uses both Parquet and ElasticSearch. It is based on the existing Parquet DataSource. When I look at the filters being pushed down to buildScan, I don't get anything representing filters based on UDFs, nor on any fields generated by an explode…
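
For context: in the Spark 1.x data source API, only predicates that Spark can translate into org.apache.spark.sql.sources.Filter values (simple column-versus-literal comparisons such as EqualTo or GreaterThan) reach buildScan; predicates over UDF results, or over columns produced by explode, are evaluated by Spark after the scan and never appear in the filters array. A minimal sketch with a hypothetical relation class:

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.sources._
    import org.apache.spark.sql.types._

    // Hypothetical relation showing what arrives in buildScan.
    class EsParquetRelation(val sqlContext: SQLContext)
        extends BaseRelation with PrunedFilteredScan {

      override def schema: StructType =
        StructType(Seq(StructField("id", LongType), StructField("text", StringType)))

      override def buildScan(requiredColumns: Array[String],
                             filters: Array[Filter]): RDD[Row] = {
        // Only filters like EqualTo("id", 1L) or GreaterThan("id", 10L) show
        // up here; a predicate such as myUdf(text) = true, or one on a column
        // created by explode(...), never appears in `filters`.
        filters.foreach(f => println(s"pushed down: $f"))
        sqlContext.sparkContext.emptyRDD[Row]
      }
    }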