Please provide the Spark version, the environment you are running on (on-prem, cloud, etc.), state whether you are running on YARN, etc., and your spark-submit parameters.
Have you checked the Spark UI (by default on port 4040) under the Stages and Executors tabs?

HTH, view my LinkedIn profile <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>

On Thu, 3 Jun 2021 at 10:51, Subash Prabanantham <subashpraba...@gmail.com> wrote:

> Hi Team,
>
> I am trying to understand how to estimate Kube CPU with respect to Spark
> executor cores.
>
> For example, the job configuration (as given at submission):
> cores/executor = 4
> # of executors = 240
>
> But the resources actually allocated when we ran the job were:
> cores/executor = 4
> # of executors = 47
>
> So the question: at the time of taking the screenshot, 60 tasks were
> running in parallel.
> [image: Screenshot 2021-06-03 at 10.37.08.png]
> (Apologies, the screenshot was taken from top in a terminal.)
>
> 188 cores are allocated, with 60 tasks currently running.
>
> When I looked at the quota for the namespace, I got the below:
>
> [image: Screenshot 2021-06-03 at 10.36.06.png]
>
> How do I reconcile requests.cpu == 5290m (i.e. 5.29 CPUs) and limits == 97
> with 60 tasks running in parallel?
>
> Say, for acquiring 512 cores in total (across Spark executors), what would
> be the configuration for Kube requests.cpu and limits.cpu?
>
> Thanks,
> Subash
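On the quota reading: Kubernetes expresses CPU quantities in millicores, where 1000m equals one CPU core, so requests.cpu == 5290m means 5.29 cores' worth of CPU requests are currently consumed in the namespace, and limits == 97 is the namespace-wide cap on the sum of container CPU limits. A minimal sketch for inspecting this (the namespace name is a placeholder):

    # Show the namespace's quota usage; <spark-namespace> is a placeholder.
    # Kubernetes reports CPU in millicores: 1000m = 1 core, so a reported
    # requests.cpu of 5290m means 5290 / 1000 = 5.29 cores requested.
    kubectl describe resourcequota -n <spark-namespace>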
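On sizing for 512 executor cores: one hedged sketch (the API server address, container image, and jar path are placeholders, and 128 x 4 is only one way to slice 512 cores) is to pin each executor pod's Kubernetes CPU request and limit to its Spark core count:

    # Sketch only, not a drop-in command: placeholders are marked with <...>.
    # 128 executors x 4 cores = 512 total executor cores; each executor pod
    # asks Kubernetes for 4 CPUs (4000m) as both its request and its limit,
    # so the namespace quota needs at least 512 CPUs of requests.cpu headroom.
    spark-submit \
      --master k8s://https://<k8s-apiserver>:6443 \
      --deploy-mode cluster \
      --conf spark.executor.instances=128 \
      --conf spark.executor.cores=4 \
      --conf spark.kubernetes.executor.request.cores=4 \
      --conf spark.kubernetes.executor.limit.cores=4 \
      --conf spark.kubernetes.container.image=<spark-image> \
      local:///path/to/<app>.jar

If the quota's requests.cpu is lower than executors x request.cores, the scheduler will only admit as many executor pods as fit, which may explain seeing 47 executors instead of the 240 requested.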