From: kalyan
Date: 02/6/2024 10:08
To: Jay Han
Cc: Ashish Singh, Mridul Muralidharan, dev
Subject: Re: [Spark-Core] Improving Reliability of spark when Executors OOM

Hey,
Insufficient disk space is also a reliability concern, but might need

Hey all,
Thanks for this discussion, the timing couldn't be better!
At Pinterest, we recently started looking into reducing OOM failures while
also reducing the memory consumption of Spark applications. We considered the
following options:
1. Changing the core count on executors to change the memory available per task (see the sketch below).
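For anyone following along, here is a minimal sketch of that first option (the numbers and app name are hypothetical): each task's share of executor heap is roughly spark.executor.memory divided by spark.executor.cores, so lowering the core count gives each concurrent task more memory without growing the executor itself.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical sizing: with 10g and 5 cores, each of the 5 concurrent
// tasks gets ~2g of heap; dropping to 2 cores raises that share to ~5g
// at the cost of less parallelism per executor.
val spark = SparkSession.builder()
  .appName("oom-tuning-sketch")            // hypothetical name
  .config("spark.executor.memory", "10g")
  .config("spark.executor.cores", "2")     // fewer concurrent tasks
  .getOrCreate()
```

The trade-off is throughput: fewer cores per executor means fewer tasks run concurrently, so this usually pairs with requesting more executors.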

Hi all,
Just wanted to know if there is any workaround or resolution for the below
issue in standalone mode:
https://issues.apache.org/jira/browse/SPARK-9559
Ashish
There is a property you need to set:
spark.driver.allowMultipleContexts=true
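A minimal sketch of setting it in application code (assumes Spark 1.x, where this flag existed; it only suppresses the multiple-contexts check rather than making concurrent contexts supported, and it was removed in Spark 3.0):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// The flag disables the exception thrown when a second SparkContext is
// created in the same JVM; it does not make doing so safe or supported.
val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("first-context") // hypothetical app names
  .set("spark.driver.allowMultipleContexts", "true")

val sc1 = new SparkContext(conf)
val sc2 = new SparkContext(conf.clone().setAppName("second-context"))
```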
Ashish
On Wed, Jan 27, 2016 at 1:39 PM, Jakob Odersky wrote:
> A while ago, I remember reading that multiple active Spark contexts
> per JVM were a possible future enhancement.
> I was wondering i
effort, both within Spark and around interfacing with YARN, but I am just
trying to emphasise that a single node failure leading to a full application
restart does not seem right for a long-running service. (One partial
mitigation is sketched below.)
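As a hedged sketch rather than a fix for the underlying design: YARN can be told to re-attempt the application master a few times before failing the whole application, via the standard spark.yarn.maxAppAttempts setting (bounded by YARN's own yarn.resourcemanager.am.max-attempts). This is still a restart, not transparent recovery of driver state.

```scala
import org.apache.spark.SparkConf

// Allow up to 4 AM attempts before YARN fails the application. This
// setting is read at submission time, so in yarn-cluster mode it is
// typically passed as --conf spark.yarn.maxAppAttempts=4 to spark-submit.
val conf = new SparkConf()
  .set("spark.yarn.maxAppAttempts", "4")
```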
Thoughts?
Regards,
Ashish

From: Steve Loughran <ste...@hortonworks.com>
Date: Thursd
If anyone else is also facing similar problems and has figured out a good
workaround within the current design, then please share.
Regards,
Ashish
From: Ashish Rawat <ashish.ra...@guavus.com>
Date: Thursday, 27 August 2015 1:12 pm
To: "dev@spark.apache.org"
thoughts on this issue and possible future
directions.
Regards,
Ashish