Hi Aakash,
in a cluster you need to consider the total number of executors you
are using. Please take a look at the following link
for an introduction:
https://spoddutur.github.io/spark-notes/distribution_of_executors_cores_and_memory_for_spark_application.html
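For example, on a YARN cluster those settings are passed explicitly at submit time. A minimal sketch with illustrative numbers (assuming a 6-node cluster with 16 cores and 64 GB RAM per node, the kind of sizing exercise the article walks through; <main-class> and <application-jar> are placeholders for your application):

# Leave ~1 core and ~1 GB per node for the OS and Hadoop daemons; with
# 5 cores per executor that gives 3 executors per node (18 total),
# minus 1 for the YARN ApplicationMaster -> 17 executors.
${SPARK_HOME}/bin/spark-submit \
  --master yarn \
  --num-executors 17 \
  --executor-cores 5 \
  --executor-memory 19G \
  --class <main-class> \
  <application-jar>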
Regards,
Apostolos
Local: only one JVM, which runs on the host from which you submitted the job:
${SPARK_HOME}/bin/spark-submit \
  --master local[N] \
  --class <main-class> \
  <application-jar>
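A concrete local run could look like this (a sketch using the SparkPi example bundled with Spark; the exact jar file name depends on your Spark and Scala versions):

# local[4] = 4 worker threads in a single JVM; local[*] would use all cores.
${SPARK_HOME}/bin/spark-submit \
  --master local[4] \
  --class org.apache.spark.examples.SparkPi \
  ${SPARK_HOME}/examples/jars/spark-examples_2.12-3.0.1.jar 100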
Standalone: meaning you use Spark's own cluster manager (scheduler):
${SPARK_HOME}/bin/spark-submit \
  --master spark://IP_ADDRESS:7077 \
  --class <main-class> \
  <application-jar>
where IP_ADDRESS is the host on which your Spark master was started (7077 is the standalone master's default port).
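As a sketch, assuming the default port and the scripts bundled under ${SPARK_HOME}/sbin:

# Start a standalone master on this host (listens on port 7077 by default,
# web UI on port 8080):
${SPARK_HOME}/sbin/start-master.sh

# Submit against it; in standalone mode --total-executor-cores caps how many
# cores the application may take across the whole cluster:
${SPARK_HOME}/bin/spark-submit \
  --master spark://IP_ADDRESS:7077 \
  --total-executor-cores 8 \
  --executor-memory 4G \
  --class <main-class> \
  <application-jar>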