As of Hadoop 2.5.1 in MapR 4.1.0, the virtual memory checker is disabled
while the physical memory checker is enabled by default.

Since CentOS/RHEL 6 allocates virtual memory aggressively due to OS
behavior, you should either disable the virtual memory checker or increase
yarn.nodemanager.vmem-pmem-ratio to a relatively larger value.

https://www.mapr.com/blog/best-practices-yarn-resource-management
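For reference, a rough yarn-site.xml sketch of the two options could look
like the following (the ratio value 4 is only an illustrative example, not a
recommendation from the blog post; the Hadoop default is 2.1):

  <!-- Option 1: disable the virtual memory checker entirely -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>

  <!-- Option 2: keep the checker but allow more virtual memory per unit of
       physical memory (example value; tune for your cluster) -->
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
  </property>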

On Mon, Mar 14, 2016 at 3:36 AM, Steve Loughran <ste...@hortonworks.com>
wrote:

>
> On 11 Mar 2016, at 23:01, Alexander Pivovarov <apivova...@gmail.com>
> wrote:
>
> Forgot to mention. To avoid unnecessary container termination add the
> following setting to yarn
>
> yarn.nodemanager.vmem-check-enabled = false
>
>
> That can kill performance on a shared cluster: if your container code
> starts to swap, performance of everything suffers. A good ops team will
> decline such a request in a multi-tenant cluster.
>
> In such a cluster: ask for the amount of memory you think you actually
> need, and let the scheduler find space for it. This not only stops you
> killing cluster performance, it means that on a busy cluster, you get the
> same memory and CPU as you would on an idle one: so more consistent
> workloads (and nobody else swapping your code out).
>
> Regarding the numbers, people need to remember that if they are running
> Python work in the cluster, they need to include more headroom.
>
> If you are going to turn off memory monitoring, have a play
> with yarn.nodemanager.pmem-check-enabled=false too.
>
