[ https://issues.apache.org/jira/browse/HIVE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated HIVE-7371:
--------------------------------
    Status: Patch Available  (was: In Progress)

> Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-7371
>                 URL: https://issues.apache.org/jira/browse/HIVE-7371
>             Project: Hive
>          Issue Type: Task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Chengxiang Li
>         Attachments: HIVE-7371-Spark.1.patch
>
>
> Currently, the Spark client ships all Hive JARs, including those that Hive
> depends on, to the Spark cluster when a query is executed by Spark. This is
> inefficient and can cause library conflicts. Ideally, only a minimum set of
> JARs should be shipped; this task is to identify such a set.
> We should learn from the current MR setup, for which I assume only the
> hive-exec JAR is shipped to the MR cluster.
> We also need to ensure that user-supplied JARs are shipped to the Spark
> cluster in a similar fashion to how MR handles them.
> NO PRECOMMIT TESTS. This is for spark-branch only.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
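Whatever mechanism the patch ultimately uses, a common way to identify the JAR that defines a given class (e.g. to locate the hive-exec JAR for shipping rather than shipping everything on the classpath) is to query the class's code source. This is only a hedged sketch; the class `JarLocator` and method `jarOf` are hypothetical names, not part of Hive:

```java
import java.net.URL;
import java.security.CodeSource;

public class JarLocator {
    // Return the path of the JAR (or classes directory) a class was loaded
    // from, or null for bootstrap classes that have no code source.
    public static String jarOf(Class<?> cls) {
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        if (src == null) {
            return null;
        }
        URL location = src.getLocation();
        return location == null ? null : location.getPath();
    }

    public static void main(String[] args) {
        // For illustration, locate whatever this class itself was loaded from.
        // In Hive, the same call on a class inside hive-exec would yield the
        // one JAR to add to the Spark job's shipped-file list.
        System.out.println(JarLocator.jarOf(JarLocator.class));
    }
}
```

The appeal of this approach is that it pins down exactly one JAR per required class, so the shipped set can be kept minimal instead of mirroring the full client classpath.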