[ https://issues.apache.org/jira/browse/HIVE-23354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Sherman updated HIVE-23354:
--------------------------------
    Description: 
[https://github.com/apache/hive/blob/cdd55aa319a3440963a886ebfff11cd2a240781d/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L1952-L2010]
compareTempOrDuplicateFiles uses a combination of attemptId and fileSize to
determine which file(s) to keep.
I've seen instances where this function throws an exception (thus failing the
query) because the file written by the newer attemptId is smaller than the
file written by the older attemptId.
I think this assumption is faulty: factors such as file compression and the
order in which values are written can legitimately produce a smaller file on a
later attempt. It may be prudent to simply trust that the newest attemptId is
the best choice.
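
For illustration, a minimal sketch of the kind of check described above and of
the proposed alternative. This is not the actual Utilities.compareTempOrDuplicateFiles;
the class, method, and parameter names below are hypothetical.

{code:java}
import java.io.IOException;

// Hypothetical sketch only -- illustrates the comparison described in this
// ticket, not the real Hive implementation.
final class DuplicateFileCheckSketch {

  // Current behaviour (as described above): prefer the higher attemptId, but
  // sanity-check that the newer attempt's file is not smaller than the older one.
  static boolean keepNewerAttemptWithSizeCheck(int olderAttemptId, long olderLen,
                                               int newerAttemptId, long newerLen)
      throws IOException {
    if (newerAttemptId <= olderAttemptId) {
      return false; // existing file already comes from the newest attempt
    }
    if (newerLen < olderLen) {
      // This is the sanity check that fails the query when compression or
      // write ordering makes a legitimate retry smaller than the first attempt.
      throw new IOException("File from newer attempt is smaller than older attempt");
    }
    return true;
  }

  // Proposed behaviour: trust the newest attemptId unconditionally.
  static boolean keepNewerAttempt(int olderAttemptId, int newerAttemptId) {
    return newerAttemptId > olderAttemptId;
  }
}
{code}

With the size comparison removed, a retried task that legitimately writes a
smaller file (e.g. due to better compression or a different write order) would
no longer fail the query.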


> Remove file size sanity checking from compareTempOrDuplicateFiles
> -----------------------------------------------------------------
>
>                 Key: HIVE-23354
>                 URL: https://issues.apache.org/jira/browse/HIVE-23354
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>            Reporter: John Sherman
>            Assignee: John Sherman
>            Priority: Major
>
> [https://github.com/apache/hive/blob/cdd55aa319a3440963a886ebfff11cd2a240781d/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L1952-L2010]
> compareTempOrDuplicateFiles uses a combination of attemptId and fileSize to
> determine which file(s) to keep.
> I've seen instances where this function throws an exception (thus failing the
> query) because the file written by the newer attemptId is smaller than the
> file written by the older attemptId.
> I think this assumption is faulty: factors such as file compression and the
> order in which values are written can legitimately produce a smaller file on
> a later attempt. It may be prudent to simply trust that the newest attemptId
> is the best choice.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
