I think you just hit https://issues.apache.org/jira/browse/SPARK-15899

Could you try 2.0.1?
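For context, SPARK-15899 is about Spark building the default `spark.sql.warehouse.dir` on Windows as `file:E:/...`, which `java.net.URI` (and hence Hadoop's `Path`) rejects because the drive-letter path does not start with `/`. A minimal sketch of why that URI is rejected, using only the JDK (no Spark needed; the paths are just the ones from your stack trace used as placeholders):

```scala
import java.net.{URI, URISyntaxException}

object UriDemo {
  def main(args: Array[String]): Unit = {
    // What Spark effectively builds on Windows: scheme "file" plus a path
    // starting with a drive letter, not "/" -- java.net.URI rejects this.
    try {
      new URI("file", null, "E:/Scala-Eclips/workspace/spark2/spark-warehouse", null, null)
    } catch {
      case e: URISyntaxException =>
        println(e.getMessage) // "Relative path in absolute URI: file:E:/..."
    }

    // With a leading slash the same location parses fine.
    val ok = new URI("file", null, "/E:/Scala-Eclips/workspace/spark2/spark-warehouse", null, null)
    println(ok)
  }
}
```

Until you can upgrade, a commonly reported workaround is to set the warehouse dir explicitly to a well-formed URI on the builder, e.g. `.config("spark.sql.warehouse.dir", "file:///E:/spark-warehouse")` (the path here is only an example).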

On Tue, Oct 4, 2016 at 7:52 AM, Denis Bolshakov <bolshakov.de...@gmail.com>
wrote:

> I think the port in your HDFS URL is wrong; as far as I remember, the default
> is 8020, not 9000.
>
> On Oct 4, 2016 at 17:29, "Hafiz Mujadid" <hafizmujadi...@gmail.com>
> wrote:
>
> Hi,
>>
>> I am trying an example of structured streaming in Spark using the following
>> piece of code:
>>
>> val spark = SparkSession
>>   .builder
>>   .appName("testingSTructuredQuery")
>>   .master("local")
>>   .getOrCreate()
>> import spark.implicits._
>>
>> val userSchema = new StructType()
>>   .add("name", "string")
>>   .add("age", "integer")
>>
>> val csvDF = spark
>>   .readStream
>>   .option("sep", ",")
>>   .schema(userSchema) // Specify schema of the CSV files
>>   .csv("hdfs://192.168.23.107:9000/structuredStreaming/")
>> csvDF.show
>>
>> When I run this piece of code, the following exception is raised:
>>
>> Exception in thread "main" java.lang.IllegalArgumentException:
>> java.net.URISyntaxException: Relative path in absolute URI:
>> file:E:/Scala-Eclips/workspace/spark2/spark-warehouse
>> at org.apache.hadoop.fs.Path.initialize(Path.java:206)
>> at org.apache.hadoop.fs.Path.<init>(Path.java:172)
>> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:114)
>> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
>> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
>> at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
>> at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95)
>> at org.apache.spark.sql.internal.SessionState$$anon$1.<init>(SessionState.scala:112)
>> at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
>> at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:111)
>> at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
>> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
>> at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:142)
>> at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:153)
>> at org.apache.spark.sql.streaming.DataStreamReader.csv(DataStreamReader.scala:251)
>> at com.platalytics.spark.two.test.App$.main(App.scala:22)
>> at com.platalytics.spark.two.test.App.main(App.scala)
>>
>>
>> Please guide me in this regard.
>>
>> Thanks
>>
>
