From: Gourav Sengupta
Date: Saturday, March 5, 2022 at 1:59 AM
To: Anil Dasari
Cc: Yang,Jie(INF), user@spark.apache.org
Subject: Re: {EXT} Re: Spark Parquet write OOM

Hi Anil,

any chance you tried setting the limit on the number of records to be written out at a time?

Regards,
Gourav
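A minimal sketch of the suggestion above (capping how many records land in each output file), assuming a job launched as `your_job.py` and a purely illustrative limit:

```shell
# Sketch only: cap records per Parquet part file.
# spark.sql.files.maxRecordsPerFile (0 = unlimited by default) makes the
# writer roll over to a new file after N records, bounding how much each
# writer task accumulates before flushing.
spark-submit \
  --conf spark.sql.files.maxRecordsPerFile=1000000 \
  your_job.py
```

The same limit can be set per write with `.option("maxRecordsPerFile", ...)` on the DataFrame writer.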
On Thu, Mar 3, 2022 at 3:12 PM Anil Dasari wrote:
Hi Gourav,

Tried increasing the number of shuffle partitions and using higher executor memory. Both didn't work.

Regards
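The two attempts described above would look roughly like this at submit time; the values are illustrative assumptions, not numbers from the thread:

```shell
# Sketch only: the two knobs reported as tried.
# spark.sql.shuffle.partitions (default 200) controls post-shuffle parallelism;
# spark.executor.memory raises each executor's JVM heap (-Xmx).
spark-submit \
  --conf spark.sql.shuffle.partitions=1000 \
  --conf spark.executor.memory=8g \
  your_job.py
```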
From: Gourav Sengupta
Date: Thursday, March 3, 2022 at 2:24 AM
To: Anil Dasari
Cc: Yang,Jie(INF) , user@spark.apache.org
Subject: Re: {EXT} Re: Spark Parquet write OOM
Hi,
I do not
Answers in the context. Thanks.

From: Gourav Sengupta
Date: Thursday, March 3, 2022 at 12:13 AM
To: Anil Dasari
Cc: Yang,Jie(INF), user@spark.apache.org
Subject: Re: {EXT} Re: Spark Parquet write OOM

Hi Anil,

I was trying to work out things for a while yesterday, but may need your kind help.

Can you please share
2nd attempt..

Any suggestions to troubleshoot and fix the problem? Thanks in advance.

Regards,
Anil

From: Anil Dasari
Date: Wednesday, March 2, 2022 at 7:00 AM
To: Gourav Sengupta, Yang,Jie(INF)
Cc: user@spark.apache.org
Subject: Re: {EXT} Re: Spark Parquet write OOM

Hi Gourav and Yang,

Thanks for the response.

Please find the answers below.
To: user@spark.apache.org
Subject: {EXT} Re: Spark Parquet write OOM

Hi Anil,

before jumping to the quick symptomatic fix, can we try to understand the issues?

1. What is the version of SPARK you are using?
2. Are you doing a lot of in-memory transformations like adding columns, or running joins, or
the length of memory to be allocated, because `-XX:MaxDirectMemorySize` and `-Xmx` should have the same capacity by default.
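The point above can be made explicit at submit time; a sketch with assumed sizes (not values from the thread):

```shell
# Sketch only: when -XX:MaxDirectMemorySize is not set, the JVM allows direct
# (off-heap NIO) buffers to grow up to the -Xmx heap size; setting it
# explicitly bounds direct-buffer usage independently of the heap.
spark-submit \
  --conf spark.executor.memory=8g \
  --conf "spark.executor.extraJavaOptions=-XX:MaxDirectMemorySize=2g" \
  your_job.py
```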
From: Anil Dasari
Date: Wednesday, March 2, 2022 at 09:45
To: "user@spark.apache.org"
Subject: Spark Parquet write OOM
Hello everyone,

We are writing a Spark DataFrame to S3 in Parquet and it is failing with the below exception.

I wanted to try the following to avoid OOM:

1. increase the default SQL shuffle partitions to reduce the load on Parquet writer tasks and avoid OOM, and
2. Increase user memory (reduce memory f
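The mitigations listed above could be expressed as submit-time configuration; the values below are illustrative assumptions, not numbers from the thread:

```shell
# Sketch only: (1) more shuffle partitions -> smaller per-task writes;
# (2) lowering spark.memory.fraction (default 0.6) leaves more "user" memory
# outside Spark's unified execution/storage pool.
spark-submit \
  --conf spark.sql.shuffle.partitions=2000 \
  --conf spark.memory.fraction=0.5 \
  your_job.py
```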