> missioned executor details so that
> Spark can ignore intermittent shuffle fetch failures.
>
> Some of these are best effort; you could also tune the number of threads
> needed for decommissioning, etc., based on your workload and run environment.
>
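For reference, the settings alluded to above are likely along these lines (config names are from the Spark configuration docs; `spark.stage.ignoreDecommissionFetchFailure` requires Spark 3.4+, and the values shown are illustrative, not recommendations):

    spark.decommission.enabled                           true
    spark.storage.decommission.enabled                   true
    spark.storage.decommission.shuffleBlocks.enabled     true
    spark.storage.decommission.rddBlocks.enabled         true
    # Threads used to migrate shuffle blocks off a decommissioning
    # executor; tune to your workload and node bandwidth (default 8):
    spark.storage.decommission.shuffleBlocks.maxThreads  8
    # Don't count fetch failures from decommissioned executors toward
    # stage failure, so intermittent shuffle fetch errors are ignored:
    spark.stage.ignoreDecommissionFetchFailure           true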
On Thu, 27 Jun 2024, 09:03 Rajesh Mahindra wrote:
> Hi folks,
>
> I am planning to leverage the "Spark Decommission" feature in production
> since our company uses SPOT instances on Kubernetes. I wanted to get a
> sense of how stable the feature is for production usage and if anyone has
> thoughts around trying it out in production, esp