Hi Aakash,

in cluster mode you need to consider the total number of executors you are using (and the cores per executor). Please take a look at the following link for an introduction:


https://spoddutur.github.io/spark-notes/distribution_of_executors_cores_and_memory_for_spark_application.html
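As a rough sketch (my own illustration, not from the link above): in local[N], N is the number of worker threads on a single machine, while on a cluster the comparable knob is the total number of task slots, i.e. num-executors times executor-cores. The application file name, memory setting, and YARN master below are hypothetical placeholders:

```shell
# Sketch only: local[8] gives 8 worker threads on one machine.
# On a cluster, total parallelism ~= num-executors * executor-cores.
# Here 4 executors * 2 cores = 8 task slots, comparable to local[8].
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 4g \
  my_app.py   # hypothetical application file
```

Note that --num-executors applies to YARN; on standalone or Mesos clusters the analogous setting is spark.cores.max (or dynamic allocation).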


regards,

Apostolos




On 14/09/2018 11:21 a.m., Aakash Basu wrote:
Hi,

What is the Spark cluster equivalent of standalone's local[N]? I mean, for the value N that we set as the parameter of local, which parameter takes it in cluster mode?

Thanks,
Aakash.

--
Apostolos N. Papadopoulos, Associate Professor
Department of Informatics
Aristotle University of Thessaloniki
Thessaloniki, GREECE
tel: ++0030312310991918
email: papad...@csd.auth.gr
twitter: @papadopoulos_ap
web: http://datalab.csd.auth.gr/~apostol


---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
