Hi,

I ran a Spark job in which each executor is allocated a chunk of the input data.  For
executors with a small chunk of input data, the performance is reasonably
good.  But for executors with a large chunk of input data, the performance
is poor.  How can I tune the Spark configuration parameters to get better
performance for large input data?  Thanks.
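In case it helps frame the question, here is a sketch of the kind of settings I am wondering about.  The property names below are from the standard Spark configuration; the values are placeholders, not something I have validated on my workload:

```
# spark-defaults.conf style sketch (placeholder values, not tested)
spark.executor.memory         8g     # more heap for executors that receive large chunks
spark.executor.cores          4
spark.default.parallelism     200    # more partitions, so no single task gets a huge chunk
spark.sql.shuffle.partitions  200    # shuffle-side partition count (Spark SQL)
```

I am also unsure whether repartitioning the input (e.g. `rdd.repartition(n)`) would be a better fix for the skew than configuration changes.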


Ey-Chih Chow 



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-improve-performance-of-spark-job-with-large-input-to-executor-tp21856.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
