Hi Team,

I am trying to understand how to estimate Kubernetes CPU requests and
limits with respect to Spark executor cores.

For example,
Job configuration (as submitted):
cores/executor = 4
# of executors = 240


But the resources actually allocated when the job ran were as follows:
cores/executor = 4
# of executors = 47

So, the question: at the time of taking the screenshot, 60 tasks were
running in parallel.
[image: Screenshot 2021-06-03 at 10.37.08.png]
(Apologies, the screenshot is of top running in a terminal.)

188 cores are allocated (47 executors x 4 cores/executor = 188), with 60 tasks currently running.

When I checked the resource quota for the namespace, I got the following:

[image: Screenshot 2021-06-03 at 10.36.06.png]

How do I reconcile requests.cpu == 5290m (i.e. 5.29 CPU) and limits.cpu
== 97 with the 60 tasks running in parallel?

Say I want to acquire 512 cores in total across the Spark executors;
what would the corresponding Kubernetes requests.cpu and limits.cpu
configuration be?
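
My current understanding (please correct me if I am wrong) is that
spark.kubernetes.executor.request.cores sets each executor pod's CPU
request (and may be fractional, hence millicore values like 5290m),
spark.kubernetes.executor.limit.cores sets the pod's CPU limit, and the
request falls back to spark.executor.cores when it is not set. Below is
a minimal sketch of what I would try, assuming 128 executors x 4 cores
= 512 total cores and that the namespace quota must cover the full
request; the concrete values are my assumptions, not a verified answer:

    import org.apache.spark.sql.SparkSession

    // Sketch only: 128 executors x 4 cores = 512 total executor cores.
    // The config keys are the documented Spark-on-K8s settings; the
    // values are assumptions for this sizing example.
    val spark = SparkSession.builder()
      .appName("quota-sizing-sketch")
      .config("spark.executor.cores", "4")                    // task slots per executor
      .config("spark.executor.instances", "128")              // 128 * 4 = 512 cores total
      .config("spark.kubernetes.executor.request.cores", "4") // pod requests.cpu = 4 (4000m)
      .config("spark.kubernetes.executor.limit.cores", "4")   // pod limits.cpu = 4
      .getOrCreate()

    // With these settings, I would expect the namespace quota to need
    // roughly requests.cpu >= 128 * 4 = 512 (plus the driver pod's
    // request), and limits.cpu sized similarly if the quota enforces limits.

Is that the right way to think about it?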


Thanks,
Subash
