I get about 25 separate gzipped log files per hour. File sizes vary a lot, from 10 MB to 50 MB of gzipped JSON data. I convert this data to Parquet every hour. The Python code is very simple:
text_file = sc.textFile(src_file)
df = sqlCtx.jsonRDD(text_file.map(lambda x: x.split('\t
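Roughly, the whole hourly step looks like the sketch below; the [-1] tab-split index and the src_file/dst_file names are illustrative placeholders, since the snippet above is cut off mid-line:

# Minimal sketch, assuming the Spark 1.x entry points (sc, sqlCtx) used above.
# src_file, dst_file and the [-1] index are placeholders, not the original code.
text_file = sc.textFile(src_file)                        # .gz input is decompressed transparently
json_lines = text_file.map(lambda x: x.split('\t')[-1])  # assume the JSON payload is the last tab field
df = sqlCtx.jsonRDD(json_lines)                          # infer a schema from the JSON strings
df.write.parquet(dst_file)                               # hourly Parquet output

Note that gzip input is not splittable, so each file is read as a single partition regardless of its size, which is why how the data is partitioned before the write matters.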
What is your data size, what algorithm are you using, and what is the expected time? Depending on this, the group can recommend optimizations or tell you that your expectations are unrealistic.
> On 20 Jan 2016, at 18:24, Pavel Plotnikov wrote:
>
> Thanks, Akhil! It helps, but the job is still not fast enough; maybe I missed something.
It would be good if you could share the code; someone here, or I, can guide you better once you post the code snippet.
Thanks
Best Regards
On Wed, Jan 20, 2016 at 10:54 PM, Pavel Plotnikov <pavel.plotni...@team.wrike.com> wrote:
> Thanks, Akhil! It helps, but the job is still not fast enough; maybe I missed something.
Thanks, Akhil! It helps, but the job is still not fast enough; maybe I missed something.
Regards,
Pavel
On Wed, Jan 20, 2016 at 9:51 AM Akhil Das wrote:
> Did you try re-partitioning the data before doing the write?
>
> Thanks
> Best Regards
Did you try re-partitioning the data before doing the write?
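For instance, a minimal sketch of re-partitioning before the write; df, dst_file and the partition counts here are placeholders, not taken from your job:

# Placeholders only: df, dst_file and the partition counts are illustrative.
# repartition() shuffles the data into evenly sized partitions;
# coalesce() only merges existing partitions without a shuffle (cheaper, but can stay skewed).
df.repartition(48).write.parquet(dst_file)
# or, just to reduce the number of output files without a full shuffle:
# df.coalesce(8).write.parquet(dst_file)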
Thanks
Best Regards
On Tue, Jan 19, 2016 at 6:13 PM, Pavel Plotnikov <pavel.plotni...@team.wrike.com> wrote:
> Hello,
> I'm using Spark on some machines in standalone mode, and data storage is
> mounted on these machines via NFS. I have in