How about running in client mode, so that the machine the job is submitted
from becomes the driver?
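
A minimal client-mode submission might look like the following sketch; the
application jar and main class names here are placeholders, not anything
from this thread:

```shell
# Submit in YARN client mode: the driver runs in this JVM, on the
# machine where spark-submit is invoked, rather than inside the AM.
# app.jar and com.example.MyApp are hypothetical names.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --class com.example.MyApp \
  app.jar
```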

Regards,
Praveen
On 9 Feb 2016 16:59, "Steve Loughran" <ste...@hortonworks.com> wrote:

>
> > On 9 Feb 2016, at 06:53, Sean Owen <so...@cloudera.com> wrote:
> >
> >
> > I think you can let YARN over-commit RAM though, and allocate more
> > memory than it actually has. It may be beneficial to let them all
> > think they have an extra GB, and let one node running the AM
> > technically be overcommitted, a state which won't hurt at all unless
> > you're really really tight on memory, in which case something might
> > get killed.
>
>
> from my test VMs
>
>       <property>
>         <description>Whether physical memory limits will be enforced for
>           containers.
>         </description>
>         <name>yarn.nodemanager.pmem-check-enabled</name>
>         <value>false</value>
>       </property>
>
>       <property>
>         <name>yarn.nodemanager.vmem-check-enabled</name>
>         <value>false</value>
>       </property>
>
>
> it does mean that a container can swap massively, hurting the performance
> of all containers around it as IO bandwidth gets soaked up, which is why
> the checks are on for shared clusters. If the cluster is dedicated, you can
> overcommit.
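
To make the overcommit concrete: with the checks above disabled, you can also
declare more container memory to YARN than the node physically has. The 49152
MB value below is purely illustrative, not a figure from this thread:

```
      <property>
        <description>Amount of physical memory, in MB, that can be
          allocated for containers. Setting this higher than the node
          actually has lets YARN over-commit RAM; 49152 is illustrative.
        </description>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>49152</value>
      </property>
```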
