Assuming you're using Spark's standalone cluster manager, it schedules applications FIFO by default.
From the docs: "By default, applications submitted to the standalone mode
cluster will run in FIFO (first-in-first-out) order, and each application
will try to use all available nodes." Link:
http://spark.apache.org/docs/latest/job-scheduling.html#scheduling-across-applications
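Since FIFO means the first application can grab every core and starve the ones queued behind it, one common workaround (described on that same docs page) is to cap each application's core usage with spark.cores.max so several applications can run side by side. A sketch, with a hypothetical app jar and master URL:

```shell
# Cap this application at 4 total cores across the cluster,
# leaving the remaining cores free for other queued applications.
# (spark://master:7077 and my-app.jar are placeholders.)
spark-submit \
  --master spark://master:7077 \
  --conf spark.cores.max=4 \
  my-app.jar
```

You can also set spark.deploy.defaultCores on the master to apply a default cap to applications that don't set spark.cores.max themselves.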



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-implement-a-scheduling-algorithm-in-Spark-tp27848p27854.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
