Thanks bethesda!
But if we have a structure like this:
a/b/a.txt
a/c/c.txt
a/d/e/e.txt
then how can we handle this case?
On HDFS I created:
/one/one.txt # contains text "one"
/one/two/two.txt # contains text "two"
Then:
val data = sc.textFile("/one/*")
data.collect
This returned:
Array(one, two)
So the above path designation appears to automatically recurse for you.
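For the deeper layout asked about above (a/b/a.txt, a/c/c.txt, a/d/e/e.txt), a single glob only reaches one fixed depth, but sc.textFile also accepts a comma-separated list of patterns, so each level can be listed explicitly. A rough sketch (the paths are illustrative, and each pattern is assumed to match at least one file):

// one glob per directory depth, joined with commas
val data = sc.textFile("a/*/*.txt,a/*/*/*.txt")
data.collect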
Hi,
You can use Hadoop's FileInputFormat API together with Spark's newAPIHadoopFile to
get recursion. For more on the topic, see
http://stackoverflow.com/questions/8114579/using-fileinputformat-addinputpaths-to-recursively-add-hdfs-path
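A minimal sketch of that approach in the shell (assuming sc is an existing SparkContext and /one is the HDFS directory from the earlier example); the mapreduce.input.fileinputformat.input.dir.recursive setting tells the new-API FileInputFormat to descend into sub-directories:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

// copy of the existing Hadoop config, with recursive directory listing switched on
val hconf = new Configuration(sc.hadoopConfiguration)
hconf.set("mapreduce.input.fileinputformat.input.dir.recursive", "true")

val data = sc.newAPIHadoopFile(
  "/one",
  classOf[TextInputFormat],
  classOf[LongWritable],
  classOf[Text],
  hconf
).map(_._2.toString)   // drop the byte-offset key, keep the line text

data.collect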
On Fri, Dec 19, 2014 at 4:50 PM, Sean Owen wrote:
How about using the HDFS API to create a list of all the directories
to read from, and passing them as a comma-joined string to
sc.textFile?
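A rough sketch of that idea (assuming sc is an existing SparkContext and the data sits under /one); this variant collects the individual file paths rather than the directories, which avoids "Not a file" errors when a directory still contains sub-directories, and then comma-joins them for sc.textFile:

import org.apache.hadoop.fs.{FileSystem, Path}
import scala.collection.mutable.ArrayBuffer

val fs = FileSystem.get(sc.hadoopConfiguration)

// recursively iterate over every file below /one
val files = ArrayBuffer[String]()
val it = fs.listFiles(new Path("/one"), true)
while (it.hasNext) {
  files += it.next().getPath.toString
}

// sc.textFile accepts a comma-separated list of paths
val data = sc.textFile(files.mkString(","))
data.collect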
On Fri, Dec 19, 2014 at 11:13 AM, Hafiz Mujadid wrote:
> Hi experts!
>
> what is an efficient way to read all files using Spark from a directory and its
> sub-directories?