The first statement refers to scheduling across separate Spark applications
connecting to the standalone cluster manager, while the second refers to
scheduling within a single Spark application, where jobs can be scheduled
using the fair scheduler.
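For the within-application case, here is a minimal sketch of the relevant
configuration. The property names are the documented Spark settings; the
allocation-file path is just an illustrative placeholder:

```properties
# spark-defaults.conf: switch the in-application job scheduler from FIFO to FAIR
spark.scheduler.mode              FAIR

# Optional: define named pools in an XML file (path below is an example only)
spark.scheduler.allocation.file   /path/to/fairscheduler.xml
```

Note this only changes how jobs submitted from different threads of the same
SparkContext share that application's resources; applications themselves are
still scheduled FIFO by the standalone master.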
On Thu, Nov 27, 2014 at 3:47 AM, Praveen Sripati wrote:
Hi,
There is a bit of inconsistency in the documentation. Which is the correct
statement?
`http://spark.apache.org/docs/latest/spark-standalone.html` says
The standalone cluster mode currently only supports a simple FIFO scheduler
across applications.
while `http://spark.apache.org/docs/latest/job-sc