Hi Alexey and Daniel, I'm using Spark 1.2.0 and still having the same error, as described below.
Do you have any news on this? I'd really appreciate your responses!

"I have a Spark cluster with 1 master VM (SparkV1) and 1 worker VM (SparkV4); the error is the same if I have 2 workers. They are now connected without a problem. But when I submit a job at the master (as in https://spark.apache.org/docs/latest/quick-start.html):

>spark-submit --master spark://SparkV1:7077 examples/src/main/python/pi.py

it seems to run OK and prints "Pi is roughly...", but the worker logs the following error:

15/02/07 15:22:33 ERROR EndpointWriter: AssociationError [akka.tcp://sparkWorker@SparkV4:47986] <- [akka.tcp://sparkExecutor@SparkV4:46630]: Error [Shut down address: akka.tcp://sparkExecutor@SparkV4:46630] [
  akka.remote.ShutDownAssociation: Shut down address: akka.tcp://sparkExecutor@SparkV4:46630
  Caused by: akka.remote.transport.Transport$InvalidAssociationException: The remote system terminated the association because it is shutting down.
]

More about the setup: each VM has only 4 GB RAM, runs Ubuntu, and uses spark-1.2.0 built for Hadoop 2.6.0 or 2.4.0."
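For reference, this is a minimal sketch of the kind of job being submitted. It roughly follows the bundled examples/src/main/python/pi.py; the app name, sample count, and partition handling here are illustrative, not necessarily the exact shipped code.

    # Submitted with, e.g.:
    #   spark-submit --master spark://SparkV1:7077 pi_sketch.py
    import sys
    from random import random
    from operator import add

    from pyspark import SparkContext

    if __name__ == "__main__":
        sc = SparkContext(appName="PythonPi")
        # number of partitions, optionally taken from the command line
        partitions = int(sys.argv[1]) if len(sys.argv) > 1 else 2
        n = 100000 * partitions

        def inside(_):
            # sample a point in the square [-1, 1] x [-1, 1] and
            # count it if it falls inside the unit circle
            x = random() * 2 - 1
            y = random() * 2 - 1
            return 1 if x * x + y * y < 1 else 0

        count = sc.parallelize(range(1, n + 1), partitions).map(inside).reduce(add)
        print("Pi is roughly %f" % (4.0 * count / n))

        sc.stop()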