Hi, 

I need to know how to run a self-contained Spark app (three Python files) on a
Spark standalone cluster. Should I copy the .py files to the cluster, or store
them locally, on HDFS, or on S3? I tried the following, both locally and on S3,
with a zip of my .py files, as suggested here
<http://spark.apache.org/docs/latest/submitting-applications.html>:

./bin/spark-submit \
  --master spark://ec2-54-51-23-172.eu-west-1.compute.amazonaws.com:5080 \
  --py-files s3n://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@mubucket//weather_predict.zip

But I get: “Error: Must specify a primary resource (JAR or Python file)”
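
Reading the error and the linked docs, I suspect the problem is that --py-files
only ships dependency files, and the main script still has to be given as the
final positional argument. A sketch of what I think the command should look
like (weather_main.py is a placeholder for my actual entry-point script):

./bin/spark-submit \
  --master spark://ec2-54-51-23-172.eu-west-1.compute.amazonaws.com:5080 \
  --py-files s3n://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@mubucket//weather_predict.zip \
  weather_main.py

Is that the right form, and if so, where should weather_main.py itself live
(local to the machine running spark-submit, HDFS, or S3)?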

Best, 
Kevin 




