I know that, thanks, but it's not a reliable solution.
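A more robust alternative (a sketch only, assuming HDFS HA is in use; every nameservice and host name below is a placeholder) is to define each cluster as a logical nameservice in the client-side hdfs-site.xml visible to Zeppelin's Spark interpreter, so fully qualified URIs keep working across namenode failover:

```xml
<!-- client-side hdfs-site.xml; all names are placeholders -->
<configuration>
  <!-- Logical names for two of the clusters -->
  <property>
    <name>dfs.nameservices</name>
    <value>clusterA,clusterB</value>
  </property>
  <!-- HA namenodes for clusterA -->
  <property>
    <name>dfs.ha.namenodes.clusterA</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.clusterA.nn1</name>
    <value>a-nn1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.clusterA.nn2</name>
    <value>a-nn2.example.com:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.clusterA</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- repeat the dfs.ha.namenodes / rpc-address / proxy.provider
       properties for clusterB (and the third cluster) -->
</configuration>
```

With that in place, paths such as hdfs://clusterA/file and hdfs://clusterB/out address each cluster by its logical name rather than a single namenode host:port, regardless of which namenode is currently active.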

2017-03-26 5:23 GMT+02:00 Jianfeng (Jeff) Zhang <[email protected]>:

>
> You can try to specify the namenode address for hdfs file. e.g
>
> spark.read.csv("hdfs://localhost:9009/file")
>
> Best Regards,
> Jeff Zhang
>
>
> From: Serega Sheypak <[email protected]>
> Reply-To: "[email protected]" <[email protected]>
> Date: Sunday, March 26, 2017 at 2:47 AM
> To: "[email protected]" <[email protected]>
> Subject: Setting Zeppelin to work with multiple Hadoop clusters when
> running Spark.
>
> Hi, I have three hadoop clusters. Each cluster has its own NN HA
> configured and YARN.
> I want to allow users to read from any cluster and write to any cluster.
> Users should also be able to choose which cluster runs their Spark job.
> What is the right way to configure this in Zeppelin?
>
>
