[ https://issues.apache.org/jira/browse/HIVE-7685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260763#comment-14260763 ]

Dong Chen commented on HIVE-7685:
---------------------------------

Brock, thanks for your quick feedback!
Yes, it is passed down, and this was verified in a Hive + Parquet integration
environment by:
1. checking the value in the log;
2. inserting about 2 GB of data across 5 partitions. The insert works fine
with this change and OOMs without it.
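
For reference, the verification was a dynamic-partition insert along these
lines. This is a minimal sketch assuming a HiveServer2 at localhost:10000 and
a populated source table `src`; the class, table, and column names are
illustrative, not taken from the patch or the test environment.

    // Hypothetical verification harness; requires hive-jdbc on the classpath.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DynamicPartitionRepro {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = conn.createStatement()) {
                // Enable dynamic partitioning so a single reducer can hold
                // several open Parquet writers at once.
                stmt.execute("SET hive.exec.dynamic.partition=true");
                stmt.execute("SET hive.exec.dynamic.partition.mode=nonstrict");
                stmt.execute("CREATE TABLE IF NOT EXISTS dst "
                        + "(id BIGINT, payload STRING) "
                        + "PARTITIONED BY (part STRING) STORED AS PARQUET");
                // ~2 GB of input spread over 5 partition values: each open
                // writer buffers a full row group, which is what exhausted
                // the heap before this change.
                stmt.execute("INSERT OVERWRITE TABLE dst PARTITION (part) "
                        + "SELECT id, payload, part FROM src");
            }
        }
    }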

Since that check was done several days ago, I will double-check it today and
report the result.

> Parquet memory manager
> ----------------------
>
>                 Key: HIVE-7685
>                 URL: https://issues.apache.org/jira/browse/HIVE-7685
>             Project: Hive
>          Issue Type: Improvement
>          Components: Serializers/Deserializers
>            Reporter: Brock Noland
>            Assignee: Dong Chen
>         Attachments: HIVE-7685.1.patch, HIVE-7685.1.patch.ready, 
> HIVE-7685.patch, HIVE-7685.patch.ready
>
>
> Similar to HIVE-4248, Parquet tries to write very large "row groups". This 
> causes Hive to run out of memory during dynamic partition inserts, when a 
> reducer may have many Parquet files open at a given time.
> As such, we should implement a memory manager which ensures that we don't run 
> out of memory due to writing too many row groups within a single JVM.
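
For illustration, the general idea behind such a memory manager can be
sketched as follows. This is a hypothetical simplification, not the attached
patch: each writer registers the row group size it wants, and once the
combined requests exceed a fixed fraction of the JVM heap, every writer's
allocation is scaled down proportionally.

    // Minimal sketch of proportional row-group scaling; hypothetical
    // class and method names, not the HIVE-7685 implementation.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class RowGroupMemoryManager {
        private final long poolBytes; // budget shared by all open writers
        private final Map<Object, Long> requested = new ConcurrentHashMap<>();

        public RowGroupMemoryManager(double heapFraction) {
            this.poolBytes =
                    (long) (Runtime.getRuntime().maxMemory() * heapFraction);
        }

        // Called when a writer opens; returns the row group size to use.
        public synchronized long register(Object writer, long requestedBytes) {
            requested.put(writer, requestedBytes);
            return adjustedSize(requestedBytes);
        }

        // Called when a writer closes, freeing its share of the pool.
        public synchronized void unregister(Object writer) {
            requested.remove(writer);
        }

        private long adjustedSize(long requestedBytes) {
            long total = requested.values().stream()
                    .mapToLong(Long::longValue).sum();
            // Grant requests in full while they fit in the pool; otherwise
            // scale every writer down by the same factor.
            double scale = total <= poolBytes ? 1.0 : (double) poolBytes / total;
            return (long) (requestedBytes * scale);
        }
    }

With this scheme, five writers each asking for a 256 MB row group against a
1 GB pool would each be granted roughly 200 MB, instead of all five buffering
the full configured block size at once.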



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
