Hi there,

I have some traces from my master and some workers where, for some reason,
the ./work directory of an application cannot be created on the workers.
There is also an issue with creating the master's temp directory.

master logs: http://pastebin.com/v3NCzm0u
worker's logs: http://pastebin.com/Ninkscnx

It seems that some of the executors can create the directories, but because
some others fail repeatedly, the whole job ends up failing. Shouldn't Spark
manage to keep working with a smaller number of executors instead of
failing?
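For reference, my understanding is that these directories can be redirected in conf/spark-env.sh on the workers; the paths below are examples only, not our actual setup:

```shell
# conf/spark-env.sh -- example paths; the directory must exist and be
# writable by the user running the worker daemon
SPARK_WORKER_DIR=/data/spark/work   # per-application work dirs (default: SPARK_HOME/work)
SPARK_LOCAL_DIRS=/data/spark/tmp    # scratch space for shuffle and spill files
```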





--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Directory-creation-failed-leads-to-job-fail-should-it-tp23531.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org