Hi Fabrizio,
At the moment I don't think Zeppelin supports running Spark jobs in cluster
mode. In practice, though, the K8s mode simulates cluster mode: the Zeppelin
interpreter is already started as a pod in K8s, just as a manual spark-submit
in cluster mode would start the driver in its own pod.
Spark-submit is then called from within that interpreter pod.
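For comparison, a manual cluster-mode submission against K8s would look roughly
like the sketch below (the master URL, image, class and jar path are
placeholders, not values from your setup):

spark-submit \
  --master k8s://https://kubernetes.default.svc \
  --deploy-mode cluster \
  --name example-app \
  --class org.example.ExampleApp \
  --conf spark.kubernetes.container.image=example/spark:latest \
  local:///opt/app/example-app.jar

In that case spark-submit creates a separate driver pod; in Zeppelin's K8s mode
the interpreter pod already plays that role.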
Thank you, Philipp, for your answer.
interpreter.sh is the shell script run by the Zeppelin server and, in
particular, the following line that you highlighted is the one that starts the
interpreter in CLUSTER MODE in my case:
INTERPRETER_RUN_COMMAND+=("${SPARK_SUBMIT}" "--class" "${ZEPPELIN_SERVER}" ...