You should check why the executor is being killed; once it's killed, you can get
all kinds of strange exceptions...
Either give your executors more memory (4G is rather small for Spark), or try
to decrease your input, or split it into more partitions in the input
format.
23G in LZO might expand to several times that once decompressed; how big is the data uncompressed?
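
For what it's worth, here is a minimal sketch of both suggestions (a memory bump plus more input partitions); the path, memory size, and partition count are placeholders, not values from the original job:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical settings; tune to your cluster.
val conf = new SparkConf()
  .setAppName("lzo-job")
  .set("spark.executor.memory", "8g") // more headroom than 4g

val sc = new SparkContext(conf)

// Ask for more input splits so each task holds less data at once.
// Note: minPartitions is only a hint, and .lzo files only split
// if they have been indexed (e.g. with hadoop-lzo's LzoIndexer);
// otherwise each file comes in as a single partition.
val lines = sc.textFile("hdfs:///path/to/input", 2000)

// Or repartition explicitly after the initial read.
val repartitioned = lines.repartition(2000)
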
It seems like the problem is related to --executor-cores. Is there possibly some
sort of race condition when using multiple cores per executor?
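
One quick way to test that theory, as a sketch (the app name is a placeholder): pin each executor to a single core and see whether the failures go away:

import org.apache.spark.{SparkConf, SparkContext}

// Pin executors to one core to rule out the multi-core race theory.
// This is the programmatic equivalent of passing --executor-cores 1
// to spark-submit.
val conf = new SparkConf()
  .setAppName("single-core-test")
  .set("spark.executor.cores", "1")

val sc = new SparkContext(conf)

If the job succeeds with one core per executor but fails with several, that points at shared mutable state (or a non-thread-safe input format) rather than memory pressure.
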
On Nov 22, 2015, at 12:38 PM, Jeremy Davis <jda...@marketshare.com> wrote:
Hello,
I’m at a loss trying to diagnose why my Spark job is failing.