Subject: Re: use big files and read from HDFS was: performance problem when
reading lots of small files created by spark streaming.
> Hi Pedro
>
> I did some experiments using one of our relatively small data sets. The
> data set is loaded into 3 or 4 data frames. I then call count()
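(A timing harness along these lines is presumably what's behind that experiment; df and logger here are placeholders, not names from the original:)

    // Sketch: measure how long a count() over one loaded DataFrame takes.
    long start = System.nanoTime();
    long rows = df.count();
    long elapsedMs = (System.nanoTime() - start) / 1_000_000L;
    logger.info("count() returned " + rows + " rows in " + elapsedMs + " ms");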
Subject: Re: performance problem when reading lots of small files created
by spark streaming.
> Hi Pedro
>
> Thanks for the explanation. I started watching your repo. In the short term I
> think I am going to try concatenating my small files into 64MB files and
> using HDFS. My spark strea
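A minimal sketch of that concatenation step, assuming the Spark 1.x Java API; the paths and the partition count are illustrative placeholders, not values from the thread:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.SQLContext;

    public class CompactSmallFiles {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("CompactSmallFiles");
            JavaSparkContext jsc = new JavaSparkContext(conf);
            SQLContext sqlContext = new SQLContext(jsc);

            // Read all the small JSON files under one directory into a
            // single DataFrame.
            DataFrame df = sqlContext.read().json("hdfs:///streaming/out/");

            // Coalesce so each output file is roughly 64MB. 16 is a
            // placeholder; derive it from totalInputSize / 64MB.
            df.coalesce(16).write().json("hdfs:///compacted/");

            jsc.stop();
        }
    }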
> } catch (ExecutionException e) {
>     logger.error("", e);
> }
>
> static class SaveData {
>     private DataFrame df;
>     private String path;
>
>     SaveData(DataFrame df, String path) {
>         this.df = df;
>         this.path = path;
>     }
> }
>
> data.df.write().json(data.path);
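For context, the ExecutionException in the quoted snippet suggests the writes are submitted to a thread pool and collected via Future.get(). A rough sketch of that pattern; the pool size, saveList, and logger are assumptions, not from the original:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Sketch: submit each SaveData write to a small pool, then wait on
    // the futures so any write failure surfaces on the driver.
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<?>> futures = new ArrayList<>();
    for (SaveData data : saveList) {
        futures.add(pool.submit(() -> data.df.write().json(data.path)));
    }
    for (Future<?> f : futures) {
        try {
            f.get(); // rethrows worker failures as ExecutionException
        } catch (InterruptedException | ExecutionException e) {
            logger.error("", e);
        }
    }
    pool.shutdown();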
From: Pedro Rodriguez
Date: Wednesday, July 27, 2016 at 8:40 PM
To: Andrew Davidson
Cc: "user @spark"
Subject: Re: performance problem when reading lots of small files created
by spark streaming.
There are a few blog posts that detail one possible/likely issue, for
example:
http://tech.kinja.com/how-not-to-pull-from-s3-using-apache-spark-1704509219
TLDR: The hadoop libraries spark uses assume that their input comes from a
file system (works with HDFS); however, S3 is a key value store, not a file
system.
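One common workaround is to list the S3 keys yourself with the AWS SDK and hand Spark an explicit list of paths, instead of letting the Hadoop layer do the slow recursive listing. A rough sketch; the bucket, prefix, and variable names are placeholders, and jsc/sqlContext are assumed to be an existing JavaSparkContext and SQLContext:

    import java.util.ArrayList;
    import java.util.List;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.ObjectListing;
    import com.amazonaws.services.s3.model.S3ObjectSummary;
    import org.apache.spark.sql.DataFrame;

    // Sketch: enumerate keys directly via the S3 API (fast) rather than
    // letting Hadoop walk the "directory tree".
    AmazonS3Client s3 = new AmazonS3Client();
    List<String> paths = new ArrayList<>();
    ObjectListing listing = s3.listObjects("my-bucket", "streaming/output/");
    while (true) {
        for (S3ObjectSummary obj : listing.getObjectSummaries()) {
            paths.add("s3n://my-bucket/" + obj.getKey());
        }
        if (!listing.isTruncated()) break;
        listing = s3.listNextBatchOfObjects(listing);
    }
    // textFile accepts a comma-separated path list, so one job reads them all.
    DataFrame df = sqlContext.read().json(jsc.textFile(String.join(",", paths)));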
I have a relatively small data set; however, it is split into many small JSON
files. Each file is between maybe 4K and 400K.
This is probably a very common issue for anyone using spark streaming. My
streaming app works fine; however, my batch application takes several hours
to run.
All I am doing is