Thanks for your help; problem resolved.
As pointed out by Andrew and Meethu, I needed to use
spark://vmsparkwin1:7077 rather than the equivalent spark://10.1.3.7:7077 in
the spark-submit command.
It appears that the argument to the --master option of spark-submit
must match the master's advertised URL exactly (not just
Hi,
Try using spark://vmsparkwin1:7077 instead of spark://10.1.3.7:7077:
$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master spark://vmsparkwin1:7077 \
    --executor-memory 1G --total-executor-cores 2 \
    ./lib/spark-examples-1.0.0-hadoop2.2.0.jar 10
Thanks & Regards,
Meethu
I think I know what is happening to you. I've looked into this a bit just
this week, so it's fresh in my brain :) hope this helps.
IIRC, you get this message when no workers are known to the master.
I think this is how it works.
1) You start your master
2) You start a slave, and give it m
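For reference, a minimal standalone bring-up along those lines might look like this (the hostname and relative paths are just examples; the Worker class invocation follows the standalone-mode docs):

```shell
# 1) Start the master; its log (and the UI on port 8080) shows the
#    exact spark:// URL it registered under.
./sbin/start-master.sh

# 2) Start a worker and hand it that exact URL (hostname rather than IP,
#    if that is what the master advertises):
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://vmsparkwin1:7077

# 3) Check http://vmsparkwin1:8080 and confirm the worker is listed
#    before running spark-submit.
```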
Hi ranjanp,
If you go to the master UI (masterIP:8080), what does the first line say?
Verify that this is the same as what you expect. Another thing is that
--master in spark-submit overrides whatever you set MASTER to, so the
environment variable won't actually take effect. Another obvious thing
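That precedence (the --master flag shadowing the MASTER environment variable) can be sketched in a few lines of Python. This is only an illustration of the resolution order, not Spark's actual code; `resolve_master` and the `local[*]` fallback are assumptions for the sketch:

```python
import os

def resolve_master(cli_master=None, env=None):
    """Sketch of spark-submit-style master resolution:
    an explicit --master flag wins over the MASTER env var."""
    if env is None:
        env = dict(os.environ)
    if cli_master is not None:
        return cli_master                  # --master overrides everything
    return env.get("MASTER", "local[*]")   # env var next, then a default

# The CLI flag shadows the environment variable:
print(resolve_master("spark://vmsparkwin1:7077",
                     {"MASTER": "spark://10.1.3.7:7077"}))
# prints "spark://vmsparkwin1:7077"

# With no flag, the environment variable is used:
print(resolve_master(None, {"MASTER": "spark://vmsparkwin1:7077"}))
# prints "spark://vmsparkwin1:7077"
```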