Hi all,

I have a streaming application, and midway through things I decided to up the
executor memory. I spent a long time launching it like this:

~/spark-1.2.0-bin-cdh4/bin/spark-submit --class StreamingTest
--executor-memory 2G --master...

and observing that the executor memory was still at the old 512 MB setting.

I was about to ask whether this is a bug when I decided to delete the
checkpoints. Sure enough, the new setting took effect after that.

So my question is: why is it necessary to remove the checkpoints in order to
increase the memory allowed on an executor? This seems pretty unintuitive to me.
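
For reference, here is roughly how I understand the recovery path (a minimal
sketch, assuming the app uses the standard StreamingContext.getOrCreate
checkpoint-recovery pattern; the checkpoint path, batch interval, and class
body are placeholders, not my actual code):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingTest {
  def main(args: Array[String]): Unit = {
    // Placeholder path; the real app's checkpoint directory differs.
    val checkpointDir = "/tmp/streaming-checkpoint"

    // Called only when no checkpoint exists yet.
    def createContext(): StreamingContext = {
      val conf = new SparkConf().setAppName("StreamingTest")
      val ssc = new StreamingContext(conf, Seconds(10)) // interval is a guess
      ssc.checkpoint(checkpointDir)
      // ... DStream setup elided ...
      ssc
    }

    // When a checkpoint is present, the StreamingContext (including the
    // SparkConf captured at checkpoint time) is deserialized from it and
    // createContext() is never called, so a new --executor-memory passed
    // to spark-submit would not be picked up.
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)

    ssc.start()
    ssc.awaitTermination()
  }
}

If that is indeed what is happening, the SparkConf restored from the
checkpoint would explain why the new --executor-memory flag was ignored, but
I would like to confirm.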

Thanks for any insights.
