[ https://issues.apache.org/jira/browse/HIVE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14066697#comment-14066697 ]
Xuefu Zhang edited comment on HIVE-7371 at 7/18/14 6:27 PM:
------------------------------------------------------------

Patch committed to spark branch. Thanks to Chengxiang for the contribution!

was (Author: xuefuz):
Patch committed to trunk. Thanks to Chengxiang for the contribution!

> Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-7371
>                 URL: https://issues.apache.org/jira/browse/HIVE-7371
>             Project: Hive
>          Issue Type: Task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Chengxiang Li
>         Attachments: HIVE-7371-Spark.1.patch, HIVE-7371-Spark.2.patch, HIVE-7371-Spark.3.patch
>
>
> Currently, the Spark client ships all Hive JARs, including those that Hive depends on, to the Spark cluster when a query is executed by Spark. This is inefficient and can cause library conflicts. Ideally, only a minimum set of JARs should be shipped; this task is to identify that set.
> We should learn from the current MR setup, where I assume only the hive-exec JAR is shipped to the MR cluster.
> We also need to ensure that user-supplied JARs are shipped to the Spark cluster, in a similar fashion to what MR does.
> NO PRECOMMIT TESTS. This is for spark-branch only.
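For illustration only, a minimal Java sketch of the idea described above: register just the hive-exec JAR plus any user-added JARs with the Spark context, rather than shipping Hive's entire classpath. This is not the HIVE-7371 patch; the master URL, JAR paths, and the userJars set are hypothetical placeholders, and in real Hive the user JAR list would come from the session state rather than a hard-coded collection.

// Sketch under the assumptions above, not the actual Hive-on-Spark code.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class MinimalJarShipping {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("Hive on Spark (jar-shipping sketch)")
                .setMaster("spark://master:7077"); // placeholder master URL

        JavaSparkContext sc = new JavaSparkContext(conf);

        // Register only the JAR that executors need to run Hive operators,
        // analogous to shipping just hive-exec to an MR cluster.
        String hiveExecJar = "/opt/hive/lib/hive-exec.jar"; // hypothetical path

        // Plus whatever the user added (e.g. via ADD JAR), similar to how MR
        // places user JARs in the distributed cache. Hard-coded here only for
        // the sketch.
        Set<String> userJars = new LinkedHashSet<>(
                Arrays.asList("/tmp/my-udfs.jar")); // hypothetical user JAR

        sc.addJar(hiveExecJar);
        for (String jar : userJars) {
            sc.addJar(jar);
        }

        // Jobs submitted through this context pull down only the registered
        // JARs instead of Hive's full dependency set.
        sc.stop();
    }
}

The design point is that addJar distributes each registered file to executors on demand, so keeping the registered set small both reduces shipping cost and limits the surface for library version conflicts.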