Hi Jeff,
Sorry for the late response.
I ran in yarn-cluster mode with this setup:
%spark2.conf
master yarn
spark.submit.deployMode cluster
zeppelin.pyspark.python /home/mansop/anaconda2/bin/python
spark.driver.memory 10g
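(For reference, a quick way to confirm the interpreter really started in
yarn-cluster mode is to print the deploy mode from a note; a minimal sketch,
assuming the stock pyspark interpreter where sc is predefined:)

%spark2.pyspark
# sanity check: confirm the master and deploy mode picked up from %spark2.conf
print(sc.master)                                    # expect: yarn
print(sc.getConf().get("spark.submit.deployMode"))  # expect: cluster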
I added `log4j.logger.org.apache.zeppelin.interpreter=DEBUG` to the
`log4j_yarn_cluster.properties` file but nothing has changed; in fact, the
`zeppelin-interpreter-spark2-mansop-root-zama-mlx.mlx.log` file is not
updated after running my notes.
>>> I added `log4j.logger.org.apache.zeppelin.interpreter=DEBUG` to the
>>> `log4j_yarn_cluster.properties` file but nothing has changed; in fact, the
>>> `zeppelin-interpreter-spark2-mansop-root-zama-mlx.mlx.log` file is not
>>> updated after running my notes
In yarn-cluster mode, you should check the yarn application logs instead of
the local interpreter log file.
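For example, you can pull the full application log with the yarn CLI (the
application id is shown in the resource manager web UI or by
`yarn application -list`):

  yarn logs -applicationId <application_id>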
Another thing you can do is look at the yarn web ui or the resource manager
log. It is possible that yarn killed your driver because its memory usage
exceeded the container limit.
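If that is what happened, one option (besides reducing driver-side memory
usage) is to give the driver container more headroom in the interpreter
settings; a minimal sketch, assuming Spark >= 2.3 (older 2.x releases call
the property spark.yarn.driver.memoryOverhead instead):

%spark2.conf
spark.driver.memoryOverhead 2g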
The following code seems to consume a large amount of memory:
aList = []
for i in range(1000):
    aList.append(i*
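(The last line above is cut off in the message, so the payload below is a
hypothetical stand-in for the truncated "i*" expression. The point is that a
plain driver-side list is fully materialized in the driver's python process,
which counts against the yarn container limit; building the data as an RDD
creates it on the executors instead:)

%spark2.pyspark
# hypothetical payload: ~1 MB per element, ~1 GB total if kept in a list
aList = []
for i in range(1000):
    aList.append("x" * 1000000)   # the whole ~1 GB lives in the driver

# distributed alternative: elements are created on the executors, so the
# driver never holds the full data set
rdd = sc.range(1000).map(lambda i: "x" * 1000000)
print(rdd.count())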