Glad to hear that you have already supported it; that is exactly what we
are doing. The exceptions you mentioned don't conflict with Hive support —
we can easily make them compatible.

>Do you have an idea about where the connector should be developed? I don’t
think it makes sense for it to be part of Spark. That would keep complexity
in the main project and require updating Hive versions slowly. Using a
separate project would mean less code in Spark specific to one source, and 
could more easily support multiple Hive versions. Maybe we should create a
project for catalog plug-ins?

AFAICT, it is necessary to create a new project, since users need to build
their own connectors according to their own needs. In our implementation of
Hive on DataSourceV2, we put the basic partition API and commands in the
main project, and a default HiveCatalog and HiveConnector in an external
project. Users can depend on our project, or implement their own
HiveConnector instead. This may be a good way to structure the support.
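To sketch what this split could look like from a user's point of view: with
an external catalog project, a user-supplied Hive connector might be wired
in purely through configuration, keeping Spark itself free of Hive-specific
code. The property names and class names below are purely illustrative —
the catalog plug-in API is still under discussion, so none of this is a
committed interface:

```
# Hypothetical catalog plug-in registration (illustrative names only).
# "my_hive" is the catalog name a user would reference in SQL;
# com.example.MyHiveCatalog is a user-implemented catalog class
# shipped in the external connector project, not in Spark itself.
spark.sql.catalog.my_hive                = com.example.MyHiveCatalog
spark.sql.catalog.my_hive.metastore.uri  = thrift://metastore-host:9083
```

The point of the design is that swapping Hive versions, or swapping in a
custom HiveConnector, would then only mean changing the external dependency
and these properties, with no change to the main project.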

Looking forward to your patch submission; we can cooperate in this area.



--
Sent from: http://apache-spark-developers-list.1001551.n3.nabble.com/
