Subject: Re: [Worker Crashing] OutOfMemoryError: GC overhead limit exceeded
Yeah, we also didn't find anything related to this online.
Are you aware of any memory leaks in the worker in Spark 1.6.2 which might be
causing this?
Do you know of any documentation which explains all the tasks that a worker
performs?
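In the meantime, here is a minimal sketch of the extra diagnostics we are
thinking of enabling for the Worker daemons in conf/spark-env.sh (the flags are
standard HotSpot options; the /tmp paths are just examples, not our current
settings):

  # JVM options for the standalone Master/Worker daemons themselves
  # - dump the heap if the daemon hits an OutOfMemoryError
  # - write verbose GC logs so we can see the GC overhead building up
  SPARK_DAEMON_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/spark-worker-gc.log"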
> *Sent:* Friday, March 24, 2017 9:15 AM
> *To:* Yong Zhang
> *Cc:* user@spark.apache.org
> *Subject:* Re: [Worker Crashing] OutOfMemoryError: GC overhead limit
> exceeded
>
> Thank you for the response.
>
> Yes, I am sure because the driver was working fine. Only 2 workers went
> down with OOM.
>
> Regards,
> Behroz
>
> On Fri, Mar 24, 2017 at 2:12 PM, Yong Zhang <java8...@hotmail.com> wrote:
>
> I am not 100% sure. Are you sure your workers OOM?
>
>
> Yong
>
>
> --
> *From:* bsikander
> *Sent:* Friday, March 24, 2017 5:48 AM
> *To:* user@spark.apache.org
> *Subject:* [Worker Crashing] OutOfMemoryError: GC overhead limit exceeded
>
> Spark version: 1.6.2
> How can we avoid that in the future?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Worker-Crashing-OutOfMemoryError-GC-overhead-limit-execeeded-tp28535.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hello,
Spark version: 1.6.2
Hadoop: 2.6.0
Cluster:
All VMs are deployed on AWS.
1 Master (t2.large)
1 Secondary Master (t2.large)
5 Workers (m4.xlarge)
Zookeeper (t2.large)
Recently, 2 of our workers went down with an out-of-memory exception.
> java.lang.OutOfMemoryError: GC overhead limit exceeded
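One thing we are considering, assuming it is the Worker daemon JVM itself that
is running out of heap (we believe its default is only 1g), is giving the
daemons more headroom in conf/spark-env.sh. A rough sketch with an example
value, not a recommendation:

  # Heap for the standalone Master/Worker daemon processes themselves.
  # SPARK_WORKER_MEMORY only caps what executors may use on the machine;
  # it does not change the Worker daemon's own heap.
  SPARK_DAEMON_MEMORY=2g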