Hi all,

Just one line of context, since last post mentioned this would help:
I'm currently writing my master's thesis (Computer Engineering) on storage
and memory in both Spark and Hadoop.

Right now I'm trying to analyze the spilling behavior of Spark, and I am not
seeing what I expect. Therefore, I want to be sure that I am looking in the
correct location.

If I set spark.local.dir and SPARK_LOCAL_DIRS to, for instance, ~/temp
instead of /tmp, will this be the location where all data is spilled to?
I assume it is, based on the description of spark.local.dir at
https://spark.apache.org/docs/latest/configuration.html:
"Directory to use for "scratch" space in Spark, including map output files
and RDDs that get stored on disk."
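For reference, here is the minimal sketch I'm using to generate shuffle/spill
files (local mode, Scala). The absolute path /home/user/temp is just a
stand-in for my scratch directory, since the JVM won't expand "~":

    import org.apache.spark.{SparkConf, SparkContext}

    object SpillLocationTest {
      def main(args: Array[String]): Unit = {
        // spark.local.dir should be an absolute path; the JVM does not expand
        // "~", so "~/temp" would not point where you expect.
        // /home/user/temp is a placeholder for the actual scratch directory.
        val conf = new SparkConf()
          .setAppName("spill-location-test")
          .setMaster("local[*]")
          .set("spark.local.dir", "/home/user/temp")

        val sc = new SparkContext(conf)

        // groupBy is a wide transformation, so it forces a shuffle; the
        // intermediate shuffle (and any spill) files should end up under
        // spark.local.dir.
        val numGroups = sc.parallelize(1 to 1000000)
          .groupBy(i => i % 1000)
          .count()

        println(s"groups: $numGroups")
        sc.stop()
      }
    }

While the job runs I watch that directory for Spark's temporary
subdirectories, which is where I expect the spilled data to appear.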

Thanks!


