Thanks, Sushrut, for the reply.
Currently I have not defined the spark.default.parallelism property.
Can you let me know what value I should set it to?
Regards,
Aditya Calangutkar
On Wednesday 28 September 2016 12:22 PM, Sushrut Ikhar wrote:
Try increasing the parallelism by repartitioning, and you may also
increase spark.default.parallelism.
You can also try decreasing the number of executor cores.
Basically, this happens when an executor uses more memory than it
requested, and YARN kills the executor. A rough sketch is below.
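For example, something along these lines (just a rough sketch -- the app
name, input path, and the numbers 200 / 2 are placeholder assumptions to
tune for your data and cluster, not recommendations):

    import org.apache.spark.{SparkConf, SparkContext}

    // Rough sketch only: all names and numbers below are placeholders.
    val conf = new SparkConf()
      .setAppName("example-job")                 // hypothetical app name
      .set("spark.default.parallelism", "200")   // e.g. roughly 2-3x the total executor cores
      .set("spark.executor.cores", "2")          // fewer cores per executor lowers per-executor memory pressure
    val sc = new SparkContext(conf)

    val input = sc.textFile("hdfs:///path/to/input")  // hypothetical input path
    val repartitioned = input.repartition(200)        // split the data into more, smaller tasks

More, smaller partitions mean each task holds less data in memory at
once, which reduces the chance of an executor blowing past its YARN
memory limit.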
Regards,
Sushrut Ikhar
https://about.me/sushrutikhar
On Wed, Sep 28, 2016 at 12:17 PM, Aditya
<aditya.calangut...@augmentiq.co.in> wrote:
I have a Spark job which runs fine for small data, but when the data
size increases it gives an executor lost error. My executor and driver
memory are already set at their maximum. I have also tried increasing
--conf spark.yarn.executor.memoryOverhead=600, but I am still not able
to fix the problem. Is there any other solution?
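(Simplified sketch of the kind of spark-submit invocation in question --
the memory sizes, core counts, overhead value, and jar name below are
placeholders, not the actual values used:)

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --executor-memory 8g \
      --executor-cores 2 \
      --conf spark.yarn.executor.memoryOverhead=1024 \
      --conf spark.default.parallelism=200 \
      your-job.jar

The memoryOverhead is off-heap headroom on top of --executor-memory; if
YARN is killing executors for exceeding their container limit, raising
the overhead and lowering per-executor concurrency are the usual levers.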