[ https://issues.apache.org/jira/browse/HIVE-7155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14021571#comment-14021571 ]
shanyu zhao commented on HIVE-7155:
-----------------------------------

[~leftylev] Thanks for your feedback. I've attached a new version calling out megabytes in the description.

> WebHCat controller job exceeds container memory limit
> -----------------------------------------------------
>
>                 Key: HIVE-7155
>                 URL: https://issues.apache.org/jira/browse/HIVE-7155
>             Project: Hive
>          Issue Type: Bug
>          Components: WebHCat
>    Affects Versions: 0.13.0
>            Reporter: shanyu zhao
>            Assignee: shanyu zhao
>         Attachments: HIVE-7155.1.patch, HIVE-7155.2.patch, HIVE-7155.patch
>
> Submitting a Hive query on a large table via WebHCat fails because the WebHCat controller job is killed by YARN when it exceeds the container memory limit (set by mapreduce.map.memory.mb, which defaults to 1 GB):
> {code}
> INSERT OVERWRITE TABLE Temp_InjusticeEvents_2014_03_01_00_00 SELECT * from
> Stage_InjusticeEvents where LogTimestamp > '2014-03-01 00:00:00' and
> LogTimestamp <= '2014-03-01 01:00:00';
> {code}
> We could increase mapreduce.map.memory.mb to work around this problem, but that changes the setting system-wide.
> We need to provide a WebHCat configuration that overrides mapreduce.map.memory.mb when submitting the controller job.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
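As a sketch of what such a WebHCat-specific override could look like in webhcat-site.xml (the property name templeton.mapper.memory.mb is illustrative here, not quoted from the attached patches; WebHCat would copy this value into the controller job's mapreduce.map.memory.mb at submission time, leaving the cluster-wide default untouched):

{code}
<!-- webhcat-site.xml : hypothetical property name, for illustration only -->
<property>
  <name>templeton.mapper.memory.mb</name>
  <!-- Memory limit, in megabytes, for the WebHCat controller map task.
       Applied only to controller jobs; other MapReduce jobs keep the
       cluster-wide mapreduce.map.memory.mb default. -->
  <value>2048</value>
</property>
{code}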