Zeppelin assigns the Scala interpreter as the default for Spark notebooks. This default adds an extra step when you want to write code with PySpark: you have to put %pyspark at the beginning of each paragraph of the notebook so that Zeppelin knows it is PySpark code.
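
For illustration, this is roughly what each paragraph ends up looking like (the code itself is just a made-up example; sc is the SparkContext that Zeppelin exposes to PySpark paragraphs):

    %pyspark
    # trivial example paragraph; without the %pyspark prefix Zeppelin
    # would hand this to the default (Scala) interpreter instead
    rdd = sc.parallelize([1, 2, 3])
    print(rdd.count())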

I modified the 'zeppelin.interpreters' property in zeppelin-site.xml and restarted the Zeppelin process, but the default interpreter has not changed. Am I missing something?
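
This is roughly the change I made in conf/zeppelin-site.xml, with the PySpark interpreter class listed first, which I understood should make it the default (class names reproduced from memory, so they may not match my install exactly):

    <property>
      <name>zeppelin.interpreters</name>
      <!-- PySpark listed before the Scala Spark interpreter -->
      <value>org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkInterpreter</value>
    </property>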

Thanks for any help
