[ https://issues.apache.org/jira/browse/HIVE-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203332#comment-16203332 ]
Rui Li commented on HIVE-15104:
-------------------------------

[~xuefuz], we need to locate the jar on the Hive side before we call spark-submit. I made Hive include it in the {{HIVE_HOME/lib}} directory. I guess we can find the path to hive-exec.jar (which is also under lib) and search for the registrator jar under the same path (or a relative one). But that depends entirely on how Hive is installed.

> Hive on Spark generates more shuffle data than Hive on MR
> ----------------------------------------------------------
>
>                 Key: HIVE-15104
>                 URL: https://issues.apache.org/jira/browse/HIVE-15104
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.1
>            Reporter: wangwenli
>            Assignee: Rui Li
>         Attachments: HIVE-15104.1.patch, HIVE-15104.2.patch, HIVE-15104.3.patch, HIVE-15104.4.patch, HIVE-15104.5.patch, HIVE-15104.5.patch, TPC-H 100G.xlsx
>
>
> The same SQL, run on the Spark engine and on the MR engine, generates different amounts of shuffle data.
> I think this is because Hive on MR serializes only part of the HiveKey, while Hive on Spark, which uses Kryo, serializes the full HiveKey object.
> What is your opinion?
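To illustrate the lookup Rui Li describes in the comment above, here is a minimal Java sketch that derives the lib directory from the location of a hive-exec class and searches for a sibling registrator jar. The class name {{RegistratorJarLocator}} and the jar-name prefix are hypothetical; as noted above, the real paths depend on how Hive is installed.

{code:java}
import java.io.File;
import java.net.URL;

import org.apache.hadoop.hive.ql.exec.Utilities;

public class RegistratorJarLocator {
  /** Returns the registrator jar sitting next to hive-exec.jar, or null if not found. */
  public static File findRegistratorJar() {
    // Any class shipped in hive-exec.jar works; use its code source to find the jar.
    URL location = Utilities.class.getProtectionDomain().getCodeSource().getLocation();
    File hiveExecJar = new File(location.getPath());
    File libDir = hiveExecJar.getParentFile();  // typically HIVE_HOME/lib

    // Jar-name prefix is illustrative; the actual artifact name may differ.
    File[] matches = libDir.listFiles(
        (dir, name) -> name.startsWith("hive-kryo-registrator") && name.endsWith(".jar"));
    return (matches != null && matches.length > 0) ? matches[0] : null;
  }
}
{code}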
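For the serialization-size concern in the quoted description, the registrator could register a compact Kryo serializer for HiveKey that writes only the used bytes plus the hash code, instead of the full object graph Kryo serializes by default. This is an illustrative sketch under that assumption, not necessarily the serializer in the attached patches.

{code:java}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;

import org.apache.hadoop.hive.ql.io.HiveKey;
import org.apache.spark.serializer.KryoRegistrator;

public class HiveKeyRegistrator implements KryoRegistrator {

  private static class HiveKeySerializer extends Serializer<HiveKey> {
    @Override
    public void write(Kryo kryo, Output output, HiveKey key) {
      // BytesWritable's backing array can be larger than the data; write only getLength() bytes.
      output.writeVarInt(key.getLength(), true);
      output.writeBytes(key.getBytes(), 0, key.getLength());
      output.writeVarInt(key.hashCode(), false);
    }

    @Override
    public HiveKey read(Kryo kryo, Input input, Class<HiveKey> type) {
      int len = input.readVarInt(true);
      byte[] bytes = input.readBytes(len);
      return new HiveKey(bytes, input.readVarInt(false));
    }
  }

  @Override
  public void registerClasses(Kryo kryo) {
    kryo.register(HiveKey.class, new HiveKeySerializer());
  }
}
{code}

Spark would pick this up via {{spark.kryo.registrator}}, which is why the jar has to be locatable before spark-submit is invoked.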