[
https://issues.apache.org/jira/browse/HIVE-7155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14015929#comment-14015929
]
Thejas M Nair commented on HIVE-7155:
-------------------------------------
I am thinking of this in terms of the number of config parameters a user has to
think about when getting started (the fewer that need tweaking, the better).
Also, many recommendations I have seen suggest setting
yarn.scheduler.minimum-allocation-mb equal to mapreduce.map.memory.mb, so
using a smaller value for templeton tasks is not going to help.
I think an alternative that enables this (minimal configuration for most
users) is to not set a default value for templeton.mapper.memory.mb. If
templeton.mapper.memory.mb is empty, fall back to mapreduce.map.memory.mb.
Users can then set templeton.mapper.memory.mb to a value higher or lower than
mapreduce.map.memory.mb for the launcher task only.
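The fallback proposed above can be sketched roughly as follows (a plain Map stands in for a Hadoop Configuration object here, and the helper name is hypothetical; this is an illustration of the lookup order, not the actual patch):

```java
import java.util.HashMap;
import java.util.Map;

public class LauncherMemory {
    // If templeton.mapper.memory.mb is unset or empty, fall back to
    // mapreduce.map.memory.mb; otherwise the templeton value overrides
    // it for the controller (launcher) task only.
    static String launcherMemoryMb(Map<String, String> conf) {
        String templetonMb = conf.get("templeton.mapper.memory.mb");
        if (templetonMb == null || templetonMb.isEmpty()) {
            return conf.get("mapreduce.map.memory.mb"); // system-wide value
        }
        return templetonMb; // explicit override for the launcher
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("mapreduce.map.memory.mb", "1024");
        System.out.println(launcherMemoryMb(conf)); // falls back to 1024
        conf.put("templeton.mapper.memory.mb", "512");
        System.out.println(launcherMemoryMb(conf)); // override: 512
    }
}
```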
> WebHCat controller job exceeds container memory limit
> -----------------------------------------------------
>
> Key: HIVE-7155
> URL: https://issues.apache.org/jira/browse/HIVE-7155
> Project: Hive
> Issue Type: Bug
> Components: WebHCat
> Affects Versions: 0.13.0
> Reporter: shanyu zhao
> Assignee: shanyu zhao
> Attachments: HIVE-7155.patch
>
>
> Submitting a Hive query on a large table via WebHCat results in failure
> because the WebHCat controller job is killed by YARN when it exceeds the
> memory limit (set by mapreduce.map.memory.mb, which defaults to 1GB):
> {code}
> INSERT OVERWRITE TABLE Temp_InjusticeEvents_2014_03_01_00_00 SELECT * from
> Stage_InjusticeEvents where LogTimestamp > '2014-03-01 00:00:00' and
> LogTimestamp <= '2014-03-01 01:00:00';
> {code}
> We could increase mapreduce.map.memory.mb to solve this problem, but that
> would change the setting system-wide.
> We need to provide a WebHCat configuration setting to override
> mapreduce.map.memory.mb when submitting the controller job.
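> Such an override might look like this in webhcat-site.xml (the property name
> follows this discussion; the value is purely illustrative):
> {code:xml}
> <property>
>   <name>templeton.mapper.memory.mb</name>
>   <value>512</value>
>   <description>Memory limit for the WebHCat controller (launcher) mapper;
>   overrides mapreduce.map.memory.mb for the controller job only.</description>
> </property>
> {code}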
--
This message was sent by Atlassian JIRA
(v6.2#6252)