[ https://issues.apache.org/jira/browse/HIVE-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092780#comment-16092780 ]

liyunzhang_intel commented on HIVE-17114:
-----------------------------------------

[~lirui]: several questions:
1. {quote}
Spark decides the reducer task for each record by computing 
hash(key)%numReducers
{quote}

Is this in the Hive on Spark code or in the Spark code? Can you point to the 
exact place in the code?

2. When I look at HIVE-7121, the problem mentioned in that JIRA's description 
seems to relate only to bucketed tables?
{code}
CREATE TABLE bucket1_1(key int, value string) CLUSTERED BY (key) INTO 100 
BUCKETS;
{code}
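To make sure I understand the skew, here is a minimal self-contained sketch (assuming the reducer is chosen as hash(key) % numReducers, with the int key used directly as its hash code as in {{ObjectInspectorUtils.hashCode}}; the class name is illustrative, not Hive code):

{code}
// Illustrative sketch (not Hive code): with an int key, the hash code is the
// key itself, so the reducer is key % numReducers. If every key is a multiple
// of 10 and numReducers is 10, all records land on reducer 0.
public class SkewSketch {
    public static void main(String[] args) {
        int numReducers = 10;
        int[] counts = new int[numReducers];
        // 100 distinct keys, all multiples of 10
        for (int key = 10; key <= 1000; key += 10) {
            int reducer = (key & Integer.MAX_VALUE) % numReducers;
            counts[reducer]++;
        }
        for (int r = 0; r < numReducers; r++) {
            System.out.println("reducer " + r + " gets " + counts[r] + " keys");
        }
        // reducer 0 gets all 100 keys; reducers 1-9 get none
    }
}
{code}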

> HoS: Possible skew in shuffling when data is not really skewed
> --------------------------------------------------------------
>
>                 Key: HIVE-17114
>                 URL: https://issues.apache.org/jira/browse/HIVE-17114
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Rui Li
>            Assignee: Rui Li
>            Priority: Minor
>         Attachments: HIVE-17114.1.patch
>
>
> Observed in HoS and may apply to other engines as well.
> When we join 2 tables on a single int key, we use the key itself as hash code 
> in {{ObjectInspectorUtils.hashCode}}:
> {code}
>       case INT:
>         return ((IntObjectInspector) poi).get(o);
> {code}
> Suppose the keys are different but are all multiples of 10. And if we 
> choose 10 as #reducers, the shuffle will be skewed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
