Andrew,
Thanks for your answer. It validates our finding. Unfortunately, client mode
assumes that I'm running on a "privileged" node, by which I mean a node that
has network access to all the workers and vice versa. This is a big
assumption to make and unreasonable in certain circumstances.
Hi Randy and Gino,
The issue is that standalone-cluster mode is not officially supported.
Please use standalone-client mode instead, i.e. specify --deploy-mode
client in spark-submit, or simply leave out this config because it defaults
to client mode.
Unfortunately, this is not currently documented.
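For reference, a minimal standalone-client submission along the lines Andrew describes might look like this (the class name, master URL, and jar name are hypothetical placeholders):

```shell
# Client mode is the default, but it can also be requested explicitly.
# com.example.MyApp, spark://master:7077, and myapp.jar are placeholders.
spark-submit \
  --class com.example.MyApp \
  --master spark://master:7077 \
  --deploy-mode client \
  myapp.jar
```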
I've found that the jar will be copied to the worker from HDFS fine, but it is
not added to the Spark context for you. You have to know that the jar will end
up in the driver's working directory, so you just add the file name of the jar
to the context in your program.
In your example below, ju
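A minimal sketch of the workaround described above, assuming a hypothetical dependency jar named deps.jar that has already been copied into the driver's working directory:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MyApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MyApp") // hypothetical app name
    val sc = new SparkContext(conf)

    // The jar fetched from HDFS lands in the driver's working directory
    // under its original file name, so register it by bare file name.
    sc.addJar("deps.jar") // deps.jar is a hypothetical placeholder

    // ... rest of the application ...
    sc.stop()
  }
}
```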
In addition, the jar file can be copied to the driver node automatically.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/problem-about-cluster-mode-of-spark-1-0-0-tp7982p7984.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.