You can try specifying the namenode address in the HDFS URI, e.g.
spark.read.csv("hdfs://localhost:9009/file")
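As a rough sketch (Scala in Zeppelin's Spark interpreter; the namenode hosts, port, and paths below are made-up placeholders for your clusters), fully qualified URIs would let a single job read from one cluster and write to another:

// Hypothetical namenodes for two clusters; substitute your own hosts/ports.
val input = spark.read.csv("hdfs://nn-cluster1.example.com:8020/data/input.csv")
input.write.csv("hdfs://nn-cluster2.example.com:8020/data/output")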
Best Regards,
Jeff Zhang
From: Serega Sheypak <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Sunday, March 26, 2017 at 2:47 AM
To: "[email protected]" <[email protected]>
Subject: Setting Zeppelin to work with multiple Hadoop clusters when running
Spark.
Hi, I have three Hadoop clusters. Each cluster has its own NN HA and YARN configured.
I want to allow users to read from any cluster and write to any cluster. A user should
also be able to choose where to run their Spark job.
What is the right way to configure it in Zeppelin?