I am trying to debug a Spark application on a cluster with one master and
several worker nodes. I have successfully set up the master and worker
nodes using the Spark standalone cluster manager. I downloaded the Spark
folder with binaries and used the following commands to set up the worker
and master nodes. These commands are executed from the Spark directory.

Command for launching the master:

./sbin/start-master.sh
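
In case it matters, I believe the master can also be started with an
explicit host and port; the hostname and port numbers below are just
placeholders from my setup, not anything special:

./sbin/start-master.sh --host 192.168.1.10 --port 7077 --webui-port 8080
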
Command for launching a worker node:

./bin/spark-class org.apache.spark.deploy.worker.Worker <master-URL>
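
For example, using the spark:// URL shown at the top of the master's web
UI (port 8080 by default; the IP here is just an example from my setup):

./bin/spark-class org.apache.spark.deploy.worker.Worker spark://192.168.1.10:7077
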
Command for submitting the application:

./bin/spark-submit --class Application --master <master-URL> ~/app.jar
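
Concretely, since SparkPi is one of the bundled examples that calls
reduce(), I expect to run something like the following (the exact name of
the examples jar varies by release, so I am using a wildcard):

./bin/spark-submit --class org.apache.spark.examples.SparkPi \
  --master spark://192.168.1.10:7077 \
  lib/spark-examples-*.jar 10
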
Now I would like to understand the flow of control through the Spark
source code on the worker nodes when I submit my application (I just want
to use one of the bundled examples that calls reduce()). I am assuming I
should set up Spark in Eclipse. The Eclipse setup link on the Apache Spark
website appears to be broken. I would appreciate some guidance on setting
up Spark and Eclipse so that I can step through the Spark source code on
the worker nodes.
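
From what I have read so far, one approach might be to start the executor
JVMs with the standard JDWP debug agent and then attach Eclipse's Remote
Java Application debugger to them. I have not verified this, and the port
(5005) is just one I picked; presumably it only works cleanly with a
single executor per host, since multiple executors would collide on the
port:

./bin/spark-submit --class Application --master <master-URL> \
  --conf "spark.executor.extraJavaOptions=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005" \
  ~/app.jar

With suspend=y the executor should wait until the debugger attaches (Run >
Debug Configurations > Remote Java Application in Eclipse, pointing at the
worker's host and port 5005). Does this sound like the right direction?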

If Eclipse is not a good fit, I would be open to using another IDE or
approach that would let me step through the Spark source code after
launching my application.
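
For what it's worth, my current (untested) plan is to build Spark from
source with Maven and then import it into Eclipse as an existing Maven
project so the debugger can resolve the Spark sources:

mvn -DskipTests clean package

followed by File > Import > Existing Maven Projects in Eclipse (with m2e
installed). If there is a better-supported route, I would be glad to hear
it.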

Thanks!


