The logs I pasted are from the worker logs only.
Spark does have permission to write into /opt; it's not that the worker is
unable to start. It runs perfectly for days, but then abruptly dies.
And it's not always this machine; sometimes it's some other machine. It
happens once in a while, but wh
Can you dig a bit more into the worker logs? Also make sure that Spark has
permission to write to /opt/ on that machine, as it's one machine that is
always throwing up.
Thanks
Best Regards
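As a quick sanity check on the affected machine, something like the sketch below can confirm both write permission and free space at once. It is only an illustration: `SPARK_WORK_DIR` is a hypothetical variable here (not a setting Spark reads in this form), so point it at whatever directory your worker actually writes to.

```shell
# Sanity checks for the directory a Spark worker writes to.
# SPARK_WORK_DIR is an illustrative variable -- set it to your actual
# worker work/log directory (defaulting to /opt here, as in this thread).
DIR="${SPARK_WORK_DIR:-/opt}"

# 1. Can the user running the worker create a file there?
if touch "$DIR/.spark_write_test" 2>/dev/null; then
    echo "write OK"
    rm -f "$DIR/.spark_write_test"
else
    echo "write FAILED"
fi

# 2. Free space on the partition holding the directory
#    (a full partition also surfaces as write failures).
df -h "$DIR"

# 3. Inode exhaustion can block writes even when free bytes remain.
df -i "$DIR"
```

Running this as the same user the worker runs as matters; root can often write where the worker cannot.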
On Sat, Jul 11, 2015 at 11:18 PM, gaurav sharma wrote:
Hi All,

I am facing this issue in my production environment.
My worker dies by throwing this exception.
But I see that space is available on all the partitions of my disk.
I did NOT see any abrupt increase in disk I/O which might have choked the
executor writing to the stderr file.
But still m