Hi,

After reviewing makeOffers and launchTasks in CoarseGrainedSchedulerBackend, I came to the following conclusion: scheduling in Spark relies on cores only, not memory. That is, the number of tasks Spark can run on an executor is constrained solely by the number of cores available, even though both memory and cores can be specified explicitly when submitting a Spark application for execution.
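To make what I mean concrete, here is a minimal sketch of the cores-only accounting (not the actual Spark source; ExecutorInfo, cpusPerTask and the numbers are made up for illustration, with cpusPerTask standing in for spark.task.cpus, which defaults to 1):

// A hypothetical model of the cores-only accounting described above.
// None of these types come from Spark; they only illustrate that the number
// of tasks an executor can run concurrently is freeCores / cpusPerTask,
// with no memory term anywhere in the arithmetic.
case class ExecutorInfo(id: String, totalCores: Int, freeCores: Int)

object CoresOnlySchedulingSketch {
  // Stands in for spark.task.cpus (defaults to 1).
  val cpusPerTask = 1

  // How many tasks could be launched right now across all executors.
  def schedulableTasks(executors: Seq[ExecutorInfo]): Int =
    executors.map(_.freeCores / cpusPerTask).sum

  def main(args: Array[String]): Unit = {
    // Two executors with 4 free cores each: at most 8 concurrent tasks,
    // no matter how much memory each executor was given at submit time.
    val executors = Seq(
      ExecutorInfo("exec-1", totalCores = 4, freeCores = 4),
      ExecutorInfo("exec-2", totalCores = 4, freeCores = 4)
    )
    println(s"Schedulable tasks: ${schedulableTasks(executors)}")  // prints 8
  }
}

Running this prints 8 schedulable tasks for two 4-core executors; memory never enters the computation, which is exactly what I see in makeOffers.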
Would you agree? Am I missing anything important? I was quite surprised when I found this out, as I had assumed memory would also be a limiting factor.

Best regards,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski