Hello, I have been running Zeppelin in yarn-client mode, and so far I have been copying the required jars to the folder specified by spark.home (/opt/zeppelin/interpreter/spark/) on each cluster node. Is it possible to specify some HDFS location and load the jars from there instead? How can I configure that? Thanks!
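For what it's worth, one common approach (a sketch, not verified against every Zeppelin version) is to avoid copying jars to each node and instead pass HDFS-hosted jars to Spark through SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh; in yarn-client mode spark-submit accepts hdfs:// URLs for --jars, and YARN localizes them on the executors. The HDFS path and jar names below are placeholders:

```shell
# conf/zeppelin-env.sh -- sketch; adjust paths to your cluster.
# --jars takes a comma-separated list; hdfs:// URLs are fetched by YARN
# and distributed to executor nodes, so no per-node copying is needed.
export SPARK_SUBMIT_OPTIONS="--jars hdfs:///user/zeppelin/libs/my-dep-1.0.jar,hdfs:///user/zeppelin/libs/other-dep-2.1.jar"
```

After editing zeppelin-env.sh, restart Zeppelin (or at least the Spark interpreter) so the new submit options take effect.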