Hi all,

I'm investigating adding geospatial user-defined functions and types to Spark SQL in Spark 2.0.x. That is going rather well; I've worked out how to add geospatial UDTs and UDFs (and even UDAFs!).
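For concreteness, the UDF side looks roughly like the sketch below. (The st_distance name and the planar-distance body are illustrative placeholders, not the real geometry code.)

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("geo-udfs").getOrCreate()

    // Register a scalar UDF; the real version would operate on a geometry
    // UDT, but a planar distance between two points shows the mechanism.
    spark.udf.register("st_distance",
      (x1: Double, y1: Double, x2: Double, y2: Double) =>
        math.hypot(x2 - x1, y2 - y1))

    spark.sql("SELECT st_distance(0.0, 0.0, 3.0, 4.0)").show()  // 5.0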

As part of that investigation, I tried out the Thrift JDBC server and ran into two issues.

First, a few places in the Hive Thrift Server module make Hive-specific assumptions. In two places, a SessionState is cast to a HiveSessionState. Additionally, in SparkExecuteStatementOperation, the serialization code that handles DataTypes doesn't consider UserDefinedTypes.
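For the UDT case, one plausible shape for a fix would be to unwrap a UDT to its underlying sqlType before the existing DataType match runs, along these lines (UserDefinedType is private[spark] in 2.0, so this would only compile inside Spark's own tree):

    import org.apache.spark.sql.types.{DataType, UserDefinedType}

    // Recurse through any UDT to the SQL type backing it, then let the
    // existing serialization match handle the result.
    def unwrapUDT(dt: DataType): DataType = dt match {
      case udt: UserDefinedType[_] => unwrapUDT(udt.sqlType)
      case other                   => other
    }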

For those issues, would a JIRA and pull request be appropriate?

My second issue was connecting existing JDBC client code to the Hive JDBC driver. The code calls connection.getMetaData.getTables(). I tried various options to get that call to work, but I was never able to get the list of tables back.
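Reduced to a minimal reproduction, it is roughly the following (the URL and empty credentials are just what I'd use against a local Thrift server on its default port):

    import java.sql.DriverManager

    // Connect to a local Thrift JDBC server and list tables in "default".
    val conn = DriverManager.getConnection(
      "jdbc:hive2://localhost:10000/default", "", "")
    val tables = conn.getMetaData.getTables(null, "default", "%", null)
    while (tables.next()) {
      println(tables.getString("TABLE_NAME"))
    }
    conn.close()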

It looks like these tickets are related:
https://issues.apache.org/jira/browse/SPARK-9686
https://issues.apache.org/jira/browse/SPARK-9333

Has there been any more work on the JDBC metadata issues?

Thanks in advance,

Jim

