[ https://issues.apache.org/jira/browse/HIVE-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15631053#comment-15631053 ]
Rui Li commented on HIVE-15104:
-------------------------------

We need to use HiveKey because it holds the proper hash code to be used for partitioning. MR also uses HiveKey, but in OutputCollector it seems only the BytesWritable part gets serialized. [~wenli], is this what you mean? I suspect we'll need help from Spark if we want to do something similar. A rough sketch of what that might look like is at the end of this message.

> Hive on Spark generates more shuffle data than Hive on MR
> ----------------------------------------------------------
>
>                 Key: HIVE-15104
>                 URL: https://issues.apache.org/jira/browse/HIVE-15104
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.1
>            Reporter: wangwenli
>            Assignee: Aihua Xu
>
> The same SQL, run on the Spark and MR engines, generates a different amount of shuffle data.
> I think this is because Hive on MR serializes only part of the HiveKey, while Hive on Spark, which uses Kryo, serializes the full HiveKey object.
> What is your opinion?
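
For illustration only, here is a minimal sketch of what a trimmed-down Kryo serializer for HiveKey could look like, assuming all the Spark side really needs is the partition hash code plus the used portion of the byte buffer (roughly what MR's OutputCollector writes for the BytesWritable part). The class name and the registration point are my own assumptions, not what Hive on Spark currently does.

{code:java}
// Sketch only: a custom Kryo serializer for HiveKey that writes just the partition
// hash code plus the used portion of the byte buffer, instead of letting Kryo's
// default field serializer copy the whole object (including the full backing array).
// Class name and registration are illustrative assumptions.
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import org.apache.hadoop.hive.ql.io.HiveKey;

public class HiveKeySerializer extends Serializer<HiveKey> {

  @Override
  public void write(Kryo kryo, Output output, HiveKey key) {
    // Assumes the reduce-sink side has already called setHashCode();
    // otherwise HiveKey.hashCode() would refuse to return a value.
    output.writeInt(key.hashCode());
    // Write only getLength() valid bytes, not the (possibly larger) backing array.
    output.writeInt(key.getLength(), true);
    output.writeBytes(key.getBytes(), 0, key.getLength());
  }

  @Override
  public HiveKey read(Kryo kryo, Input input, Class<HiveKey> type) {
    int hashCode = input.readInt();
    int length = input.readInt(true);
    HiveKey key = new HiveKey();
    key.set(input.readBytes(length), 0, length);  // BytesWritable#set copies the bytes in
    key.setHashCode(hashCode);                    // restore the partitioning hash code
    return key;
  }
}

// Hypothetical registration, wherever the Kryo instance used for shuffle data is configured:
// kryo.register(HiveKey.class, new HiveKeySerializer());
{code}

Whether something like this is feasible depends on how Spark lets us hook into the shuffle serialization path, which is why I think we would need help from the Spark side.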