Hi,

Looks like the Spark client (SparkClientImpl class) submits Spark jobs to the
YARN cluster by forking a process and kicking off the spark-submit script. Does
that mean we provision new YARN containers every time we submit a job? If so,
there could be a performance hit from doing that.
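
To make the question concrete, here is a rough sketch (not the actual
SparkClientImpl code) of what forking a spark-submit process for a YARN
cluster-mode submission might look like; the class name, jar path, and
arguments below are illustrative only:

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    public class SparkSubmitLauncher {
        // Fork a child process that runs spark-submit against YARN.
        // Each submission like this asks YARN for a new application
        // master and a fresh set of executor containers.
        public static Process launch(String appJar, String mainClass)
                throws IOException {
            List<String> cmd = Arrays.asList(
                "spark-submit",             // assumes spark-submit is on PATH
                "--master", "yarn",
                "--deploy-mode", "cluster",
                "--class", mainClass,
                appJar);
            return new ProcessBuilder(cmd)
                .inheritIO()                // forward child stdout/stderr
                .start();                   // fork the child process
        }
    }

If that is roughly what happens on every job submission, then each submit pays
the container-allocation and JVM-startup cost again, which is what I am
wondering about.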

Thanks.
