Do you specify SPARK_HOME, or are you just using Zeppelin's embedded Spark?
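
If SPARK_HOME is not set in conf/zeppelin-env.sh, Zeppelin falls back to its
embedded Spark, which may not carry the same Hadoop/Azure classes as the Spark
on your laptop. As a quick check (a rough sketch, assuming the default %spark
Scala interpreter), you could run this in a paragraph:

    // Prints which Spark the interpreter resolved; no SPARK_HOME usually
    // means Zeppelin's embedded Spark is in use.
    println(sys.env.getOrElse("SPARK_HOME", "<SPARK_HOME not set - embedded Spark>"))
    println("Spark version: " + sc.version)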

Metin OSMAN <mos...@mixdata.com> wrote on Thursday, October 4, 2018, at 1:39 AM:

> Hi,
>
> I downloaded and set up Zeppelin on my local Ubuntu 18.04 computer, and I
> successfully managed to open a file on Azure Storage with the Spark
> interpreter out of the box.
>
> Then I installed the same package on an Ubuntu 14.04 server.
> When I try running a simple Spark Parquet read from an Azure Storage
> account, I get a java.io.IOException: No FileSystem for scheme: wasbs
>
> sqlContext.read.parquet("wasbs://mycontai...@myacountsa.blob.core.windows.net/mypath")
>
> java.io.IOException: No FileSystem for scheme: wasbs
>   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2304)
>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2311)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90)
>   at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2350)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2332)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:350)
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:348)
>   at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>   at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
>   at scala.collection.immutable.List.flatMap(List.scala:344)
>   at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:348)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
>   at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:559)
>   at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:543)
>   ... 52 elided
>
> I copied the interpreter.json file from my local computer to the server,
> but that did not change anything.
>
> Should it work out of the box, or could the fact that it worked on my local
> computer be due to some local Spark configuration or environment variables?
>
> Thank you,
> Metin
>
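
For reference, "No FileSystem for scheme: wasbs" usually means the hadoop-azure
classes are not on the interpreter classpath, or the wasbs scheme is not mapped
to a FileSystem implementation. A minimal sketch of one way to wire it up by
hand (the account, key, container and path below are placeholders, and it
assumes the hadoop-azure and azure-storage jars have already been added to the
interpreter, e.g. via spark.jars.packages or Zeppelin's dependency settings):

    // Map the wasbs scheme to the Azure native filesystem and supply the key.
    val hadoopConf = sc.hadoopConfiguration
    hadoopConf.set("fs.wasbs.impl",
      "org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure")
    hadoopConf.set("fs.azure.account.key.<account>.blob.core.windows.net",
      "<storage-account-key>")

    // After that, the original read should resolve the scheme.
    val df = sqlContext.read.parquet(
      "wasbs://<container>@<account>.blob.core.windows.net/<path>")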
