Well, if the system doesn't change, then the data must be different. The
exact exception probably won't be helpful, since it only tells us the last
allocation that failed. My guess is that your ingestion changed and there is
now either slightly more data than before, or it is skewed differently.
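If you want to confirm that, something like the sketch below (showPartitionSkew is a hypothetical helper, and the DataFrame it takes would be whatever batch your job is processing) prints per-partition record counts so a badly skewed partition stands out:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, spark_partition_id}

// Prints how many records sit in each partition of the given batch;
// one partition holding far more rows than the rest indicates skew.
def showPartitionSkew(df: DataFrame): Unit =
  df.groupBy(spark_partition_id().alias("partition"))
    .count()
    .orderBy(col("count").desc)
    .show(50)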
Can you please help?
Thanks
Amit
On Mon, Nov 9, 2020 at 4:18 PM Amit Sharma wrote:
> Please find below the exact exception
>
> Exception in thread "streaming-job-executor-3" java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:3332)
> at java.lang.Ab
Please find below the exact exception
Exception in thread "streaming-job-executor-3" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at java.la
Can you please help?
Thanks
Amit
On Sun, Nov 8, 2020 at 1:35 PM Amit Sharma wrote:
> Hi, I am using a 16-node Spark cluster with the below config:
> 1. Executor memory 8 GB
> 2. 5 cores per executor
> 3. Driver memory 12 GB.
>
>
> We have a streaming job. We do not see a problem, but sometimes we get
>
Hi, I am using a 16-node Spark cluster with the below config:
1. Executor memory 8 GB
2. 5 cores per executor
3. Driver memory 12 GB.
We have a streaming job. We do not see a problem, but sometimes we get an
executor-1 heap-memory exception. I do not understand why, if the data size is
the same and this job rece
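The job is submitted roughly as follows (a sketch only; the master URL, class name, and jar path are placeholders, while the memory and core settings are the ones listed above):

./spark-submit --class com.example.StreamingJob \
  --master spark://master:7077 \
  --executor-memory 8g \
  --executor-cores 5 \
  --driver-memory 12g \
  /path/to/streaming-job.jar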
It depends on how much memory is available and how much data you are
processing. Please provide the data size and cluster details so we can help.
On Fri, Aug 14, 2020 at 12:54 AM km.santanu wrote:
> Hi
> I am using Kafka stateless Structured Streaming. I have enabled a watermark of 1
> hour. After long runnin
Hi
I am using Kafka stateless Structured Streaming. I have enabled a watermark of
1 hour. After running for about 2 hours, my job terminates automatically.
Checkpointing has been enabled.
I am doing an average on the input data.
Can you please suggest how to avoid the out-of-memory error?
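A minimal sketch of this kind of watermarked average is below; the broker address, topic, window size, checkpoint path, and the averaged column are placeholders, not the real job:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{avg, length, window}

val spark = SparkSession.builder.appName("KafkaAverage").getOrCreate()
import spark.implicits._

// Placeholder topic and payload handling; the real job parses its own schema.
val input = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")
  .option("subscribe", "events")
  .load()
  .selectExpr("CAST(value AS STRING) AS value", "timestamp")

// 1-hour watermark, then an average per event-time window.
val averaged = input
  .withWatermark("timestamp", "1 hour")
  .groupBy(window($"timestamp", "10 minutes"))
  .agg(avg(length($"value")).as("avg_value_length"))  // stand-in for the real average

val query = averaged.writeStream
  .outputMode("update")
  .option("checkpointLocation", "/tmp/checkpoints/kafka-average")  // placeholder path
  .format("console")
  .start()

query.awaitTermination()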
Any idea about this?
From: Kürşat Kurt [mailto:kur...@kursatkurt.com]
Sent: Sunday, October 30, 2016 7:59 AM
To: 'Jörn Franke'
Cc: 'user@spark.apache.org'
Subject: RE: Out Of Memory issue
Hi Jörn;
I am reading a 300,000-line CSV file. It is “ß”-separated (attached
What is the size and format of the input data?
Can you provide more details on your Spark job? RDD? DataFrame? Etc. Java
version? Is this a single node? It seems your executors and OS do not get a lot
of memory.
> On 29 Oct 2016, at 22:51, Kürşat Kurt wrote:
>
> Hi;
>
> While training NaiveBay
Hi;
While training a NaiveBayes classification model, I am getting an OOM error.
What is wrong with these parameters?
Here is the spark-submit command: ./spark-submit --class main.scala.Test1
--master local[*] --driver-memory 60g /home/user1/project_2.11-1.0.jar
PS: The OS is Ubuntu 14.04 and the system has 64 GB of RAM.
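For reference, the training code is along these lines (a simplified sketch only; the input path, column layout, delimiter handling, and feature size are assumptions, not the exact program):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.NaiveBayes
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder.appName("Test1").getOrCreate()

// Assumed layout: label in the first column, free text in the second.
val data = spark.read
  .option("delimiter", "ß")
  .csv("/home/user1/train.csv")  // placeholder path
  .toDF("label", "text")
  .withColumn("label", col("label").cast("double"))

val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val tf = new HashingTF()
  .setInputCol("words")
  .setOutputCol("features")
  .setNumFeatures(1 << 18)  // capping the feature width keeps the vectors bounded
val nb = new NaiveBayes().setLabelCol("label").setFeaturesCol("features")

val model = new Pipeline().setStages(Array(tokenizer, tf, nb)).fit(data)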
> It's definitely worth setting spark.memory.fraction and
> parquet.memory.pool.ratio and trying again.
>
> Ewan
>
> -Original Message-
> From: babloo80 [mailto:bablo...@gmail.com]
> Sent: 06 January 2016 03:44
> To: user@spark.apache.org
> Subject: Out of memory issue
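Both settings can go on the submit command line, roughly like this (the values below are only illustrative, and parquet.memory.pool.ratio is a Parquet/Hadoop property, so it is passed with the spark.hadoop. prefix):

# class name and jar path below are placeholders
./spark-submit \
  --conf spark.memory.fraction=0.6 \
  --conf spark.hadoop.parquet.memory.pool.ratio=0.5 \
  --class com.example.ParquetJob \
  /path/to/job.jar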
-Original Message-
From: babloo80 [mailto:bablo...@gmail.com]
Sent: 06 January 2016 03:44
To: user@spark.apache.org
Subject: Out of memory issue
Hello there,
I have a Spark job that reads 7 Parquet files (8 GB, 3 x 16 GB, 3 x 14 GB) in
different stages of execution and creates a result Parquet of
Muthu
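The overall shape of the job is roughly as below (a bare sketch; the paths are placeholders and the union stands in for the real multi-stage transformations):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("ParquetJob").getOrCreate()

// Placeholder paths standing in for the seven differently sized inputs.
val inputs = Seq("/data/in1", "/data/in2", "/data/in3").map(spark.read.parquet(_))

// Placeholder for the real per-stage transformations before the final write.
val result = inputs.reduce(_ union _)

result.write.mode("overwrite").parquet("/data/result")  // placeholder output path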