Re: Setting Zeppelin to work with multiple Hadoop clusters when running Spark.

2017-03-26 Thread Serega Sheypak
> …under one jvm classpath. Only one default configuration will be used.
>
> Best Regards,
> Jeff Zhang
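
A quick way to confirm which default configuration actually won is to inspect the live Hadoop Configuration (a minimal sketch, assuming the spark session that Zeppelin's Spark interpreter predefines):

    // Whichever core-site.xml appears first on the interpreter's classpath
    // supplies this value; a single JVM holds exactly one default.
    println(spark.sparkContext.hadoopConfiguration.get("fs.defaultFS"))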

Re: Setting Zeppelin to work with multiple Hadoop clusters when running Spark.

2017-03-26 Thread Jianfeng (Jeff) Zhang
…under one jvm classpath. Only one default configuration will be used.

Best Regards,
Jeff Zhang

> I know it, thanks, but it's not a reliable solution.
>
> 2017-03-26 5:23 GMT+02:00 Jianfeng (Jeff) Zhang <jzh...@hortonworks.com>:
>> You can try to specify…
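
For readers hitting the same wall: one hedged workaround is to register the second cluster as an additional HDFS nameservice on the running Hadoop Configuration, the same client setup commonly used for cross-cluster distcp. A minimal sketch, in which the nameservice names, hosts, and ports are all hypothetical and spark is the session Zeppelin's Spark interpreter predefines:

    val hadoopConf = spark.sparkContext.hadoopConfiguration

    // Keep the classpath-provided cluster and declare a second logical
    // nameservice (all identifiers below are placeholders):
    hadoopConf.set("dfs.nameservices", "clusterA,clusterB")
    hadoopConf.set("dfs.ha.namenodes.clusterB", "nn1,nn2")
    hadoopConf.set("dfs.namenode.rpc-address.clusterB.nn1", "nn1.b.example.com:8020")
    hadoopConf.set("dfs.namenode.rpc-address.clusterB.nn2", "nn2.b.example.com:8020")
    hadoopConf.set("dfs.client.failover.proxy.provider.clusterB",
      "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")

    // Logical URIs for the second cluster now resolve without hard-coding
    // a single namenode host:
    val df = spark.read.csv("hdfs://clusterB/data/file.csv")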

Re: Setting Zeppelin to work with multiple Hadoop clusters when running Spark.

2017-03-26 Thread Serega Sheypak
I know it, thanks, but it's not a reliable solution.

2017-03-26 5:23 GMT+02:00 Jianfeng (Jeff) Zhang:
> You can try to specify the namenode address for the hdfs file, e.g.
>
> spark.read.csv("hdfs://localhost:9009/file")
>
> Best Regards,
> Jeff Zhang

Re: Setting Zeppelin to work with multiple Hadoop clusters when running Spark.

2017-03-25 Thread Jianfeng (Jeff) Zhang
You can try to specify the namenode address for the hdfs file, e.g.

spark.read.csv("hdfs://localhost:9009/file")

Best Regards,
Jeff Zhang
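
Spelled out in a Zeppelin Spark paragraph, the suggestion looks like the sketch below; hosts, ports, and paths are placeholders, and spark is the session the Spark interpreter predefines:

    // An unqualified path resolves through fs.defaultFS, i.e. the one
    // cluster whose *-site.xml is on the interpreter's classpath:
    val localDf = spark.read.csv("/data/events.csv")

    // A fully qualified URI names the other cluster's namenode directly,
    // bypassing the default configuration:
    val remoteDf = spark.read.csv("hdfs://namenode-b.example.com:8020/data/events.csv")

Hard-coding a namenode host this way sidesteps HA failover, which is presumably why it is called not reliable elsewhere in the thread.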