Hi Michael,

You could either set spark.local.dir through the Spark conf or set the
java.io.tmpdir system property.
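
For example, with spark-submit it could look something like this (the
path /data/spark-tmp below is just a placeholder for a location with
enough free space on your nodes):

  spark-submit \
    --conf spark.local.dir=/data/spark-tmp \
    --conf "spark.driver.extraJavaOptions=-Djava.io.tmpdir=/data/spark-tmp" \
    --conf "spark.executor.extraJavaOptions=-Djava.io.tmpdir=/data/spark-tmp" \
    ...

The same spark.local.dir setting could also go into spark-defaults.conf
or be set on the SparkConf before the SparkContext is created.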

Regards,
Keith.

http://keith-chapman.com

On Mon, Mar 19, 2018 at 9:59 AM, Michael Shtelma <mshte...@gmail.com> wrote:

> Hi everybody,
>
> I am running a Spark job on YARN, and my problem is that the blockmgr-*
> folders are being created under
> /tmp/hadoop-msh/nm-local-dir/usercache/msh/appcache/application_id/*
> This folder can grow to a significant size for a single job and does
> not really fit into the /tmp file system, which is a real problem for
> my installation.
> I have redefined hadoop.tmp.dir in core-site.xml and
> yarn.nodemanager.local-dirs in yarn-site.xml to point to another
> location, and I expected the block manager to create its files there
> rather than under /tmp, but this is not the case: the files are still
> created under /tmp.
>
> I am wondering if there is a way to make Spark not use /tmp at all and
> configure it to create all of these files somewhere else?
>
> Any assistance would be greatly appreciated!
>
> Best,
> Michael
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
