From: Yosr Kchaou
Sent: Wednesday, December 7, 2022 10:19 AM
To: user@spark.apache.org
Subject: [SPARK Memory management] Does Spark support setting limits/requests for driver/executor memory?
Hello,
We are running Spark on Kubernetes and noticed that driver/executors use
the same value for memory request and memory limit. We see that
limits/requests can be set only for CPU using the following options:
spark.kubernetes.{driver/executor}.limit.cores and
spark.kubernetes.{driver/executor}.request.cores. Is there an equivalent way to set
separate request and limit values for driver/executor memory?
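For reference, a minimal sketch of how those CPU options are passed in, set through SparkSession.builder (the master URL and all values below are made-up placeholders; as noted above, memory has no separate request/limit options, so spark.{driver,executor}.memory ends up as both the pod's memory request and its limit):

  import org.apache.spark.sql.SparkSession

  // Illustrative only: per-pod CPU request vs. limit on Kubernetes.
  val spark = SparkSession.builder()
    .master("k8s://https://example-apiserver:6443")         // placeholder API server
    .config("spark.kubernetes.driver.request.cores", "1")
    .config("spark.kubernetes.driver.limit.cores", "2")
    .config("spark.kubernetes.executor.request.cores", "2")
    .config("spark.kubernetes.executor.limit.cores", "4")
    .config("spark.executor.memory", "4g")                  // used for both the memory request and the limit
    .getOrCreate()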
// Take the machine's total memory in MB, hold back 1024 MB, and never go below 512 MB.
math.max(totalMb - 1024, 512)
}
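To make the arithmetic in that fragment concrete (defaultMemoryMb is a hypothetical wrapper written just for this note, and the machine sizes are invented):

  // Illustrative only: how the expression behaves for two machine sizes.
  def defaultMemoryMb(totalMb: Int): Int = math.max(totalMb - 1024, 512)
  defaultMemoryMb(8192)  // 7168 -> 1024 MB is held back
  defaultMemoryMb(1024)  // 512  -> the floor applies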
------ Original ------
From: "swaranga"
Date: Fri, May 22, 2015 03:31 PM
To: "user"
Subject: Spark Memory management
Experts,
This is an academic question. Since Spark runs on the JVM, how is it able to do th[...]
Thanks for any inputs.
[...] memory? How accurate are these calculations?
Thanks for any inputs.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Memory-management-tp22992.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
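To give the question above something concrete to point at: a minimal sketch, assuming it is about how Spark sizes objects on the JVM heap. Runtime and org.apache.spark.util.SizeEstimator are real APIs, but the sample data is made up and this is only an illustration, not the answer originally given on the list:

  import org.apache.spark.util.SizeEstimator

  // JVM-level view: how much heap this process may use and is using right now.
  val rt = Runtime.getRuntime
  println(s"max heap = ${rt.maxMemory()} bytes, used = ${rt.totalMemory() - rt.freeMemory()} bytes")

  // Spark-level view: estimated size of one object graph (e.g. a block about to be cached).
  // SizeEstimator walks the graph reflectively and applies per-JVM assumptions about object
  // headers and pointer sizes, so the number is an estimate rather than an exact measurement.
  val sample = Array.fill(1000)("record-" + scala.util.Random.nextInt())
  println(s"estimated size = ${SizeEstimator.estimate(sample)} bytes")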
Hey Gary,
The answer to both of your questions is that much of it is up to the
application.
For (1), the standalone master can set "spark.deploy.defaultCores" to limit
the number of cores each application can grab. However, the application can
override this with the application-specific "spark.cores.max" setting.
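A minimal sketch of that interplay (the host name, app name, and numbers are placeholders; spark.deploy.defaultCores is read by the standalone master, while spark.cores.max is set per application):

  // On the standalone master, a cluster-wide default cap is typically set via
  //   SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=4"
  // An individual application can then claim its own cap:
  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("cores-demo")                 // placeholder app name
    .setMaster("spark://master-host:7077")    // placeholder standalone master URL
    .set("spark.cores.max", "8")              // per-application override of the default cap
  val sc = new SparkContext(conf)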
I have a few questions about managing Spark memory:
1) In a standalone setup, is there any CPU prioritization across users
running jobs? If so, what is the behavior here?
2) With Spark 1.1, users will more easily be able to run drivers/shells
from remote locations that do not cause firewall headaches [...]