Hi,

I am running Zeppelin on Amazon EMR with Spark, and I keep running into
"out of memory" errors while loading a large CSV file.
By default, Zeppelin allocates 512 MB to the driver and 142 MB to the
two executors.
I tried to increase these by adding the following configuration
parameters to "zeppelin-env.sh", but it had no effect:

--conf spark.driver.memory=6g --conf spark.executor.memory=6g

I would appreciate it if you could share your comments and experience
on how to fix this.

best,
/Shahab
