If you want to use true Spark Streaming (not the same as Hadoop
Streaming/piping, as Mayur pointed out), you can use the DStream.union()
method, as described in the following docs:
http://spark.apache.org/docs/0.9.1/streaming-custom-receivers.html
http://spark.apache.org/docs/0.9.1/streaming-progra
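A minimal sketch of what that union might look like, assuming two socket text streams on illustrative host/port values (the stream sources, app name, and batch interval here are hypothetical, not from the thread):

```scala
// Hypothetical sketch: merging two input DStreams with union().
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object UnionExample {
  def main(args: Array[String]): Unit = {
    // local[2]: at least two threads, one per receiver
    val conf = new SparkConf().setAppName("UnionExample").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(1))

    // Two independent input streams (ports are illustrative)
    val streamA = ssc.socketTextStream("localhost", 9999)
    val streamB = ssc.socketTextStream("localhost", 9998)

    // union() combines both DStreams into a single DStream;
    // each batch contains the records from both sources
    val merged = streamA.union(streamB)
    merged.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Each batch of the merged DStream then contains the records received on both sockets during that interval.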
File as a stream?
I think you are confusing Spark Streaming with a buffered reader. Spark
Streaming is meant to process batches of data (files, packets, messages) as
they come in, in fact using the time of packet reception as a way to create
windows, etc.
In your case you are better off reading the file
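For a static file, a sketch with the core (non-streaming) Spark API might look like this, assuming a local text file; the path and app name are illustrative:

```scala
// Hypothetical sketch: reading a static file with the core Spark API
// rather than Spark Streaming.
import org.apache.spark.{SparkConf, SparkContext}

object ReadFileExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ReadFileExample").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // textFile() returns an RDD[String], one element per line of the file
    val lines = sc.textFile("data/input.txt")
    println(s"Line count: ${lines.count()}")

    sc.stop()
  }
}
```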