[ 
https://issues.apache.org/jira/browse/HIVE-7155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14015954#comment-14015954
 ] 

shanyu zhao commented on HIVE-7155:
-----------------------------------

[~thejas] Yes, it's a good idea to make templeton.mapper.memory.mb default to 
empty, in which case we don't set mapreduce.map.memory.mb when submitting the 
controller job. This is safer for users who are not yet aware of this option, 
yet flexible for people who need to change this configuration. I will modify 
the patch accordingly. Thanks for the suggestion!
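
A minimal sketch of the proposed behavior (the class and method names below are illustrative, not the actual patch): only override mapreduce.map.memory.mb on the controller job when templeton.mapper.memory.mb is explicitly set.

```java
import java.util.Map;

// Illustrative sketch only; names are hypothetical, not from the HIVE-7155 patch.
public class ControllerMemoryConfig {
    static final String TEMPLETON_MAPPER_MEMORY_MB = "templeton.mapper.memory.mb";

    /**
     * Returns the value to set for mapreduce.map.memory.mb on the controller
     * job, or null to leave the cluster-wide default untouched.
     */
    static String mapperMemoryOverride(Map<String, String> conf) {
        String value = conf.get(TEMPLETON_MAPPER_MEMORY_MB);
        if (value == null || value.trim().isEmpty()) {
            // Default (empty): do not set mapreduce.map.memory.mb at all.
            return null;
        }
        return value.trim();
    }
}
```

With this shape, an unset or empty property leaves the controller job on the cluster default, so only users who opt in see a behavior change.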

> WebHCat controller job exceeds container memory limit
> -----------------------------------------------------
>
>                 Key: HIVE-7155
>                 URL: https://issues.apache.org/jira/browse/HIVE-7155
>             Project: Hive
>          Issue Type: Bug
>          Components: WebHCat
>    Affects Versions: 0.13.0
>            Reporter: shanyu zhao
>            Assignee: shanyu zhao
>         Attachments: HIVE-7155.patch
>
>
> Submit a Hive query on a large table via WebHCat results in failure because 
> the WebHCat controller job is killed by Yarn since it exceeds the memory 
> limit (set by mapreduce.map.memory.mb, defaults to 1GB):
> {code}
>  INSERT OVERWRITE TABLE Temp_InjusticeEvents_2014_03_01_00_00 SELECT * from 
> Stage_InjusticeEvents where LogTimestamp > '2014-03-01 00:00:00' and 
> LogTimestamp <= '2014-03-01 01:00:00';
> {code}
> We could increase mapreduce.map.memory.mb to solve this problem, but that 
> would change the setting system-wide.
> We need to provide a WebHCat configuration to overwrite 
> mapreduce.map.memory.mb when submitting the controller job.
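
For reference, such an override would be set in WebHCat's configuration file (assumed here to be webhcat-site.xml); the property name is the one under discussion, and the value 2048 is only an example:

```xml
<property>
  <name>templeton.mapper.memory.mb</name>
  <value>2048</value>
  <description>Memory, in MB, requested for the WebHCat controller job's map
    task. Leave empty to keep the cluster-wide mapreduce.map.memory.mb
    default.</description>
</property>
```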



--
This message was sent by Atlassian JIRA
(v6.2#6252)
